I know the meme about gaming on Intel graphics, but if they implement Adaptive Sync *combined* with some sort of low framerate compensation, it would make gaming on Intel IGP much less hilarious. Can Intel license FreeSync without using an AMD GPU inside? I know FreeSync worked on Kaby Lake-G, but that had an AMD GPU.
More specifically, FreeSync 1 is just AMD's implementation of DisplayPort Adaptive Sync. Intel has to build its own implementation in its display controller and driver stack, but past that, all the signaling aspects to the monitor are standardized.
Ryan, I thought that was reversed: that AMD worked on adding Adaptive Sync into the spec, and worked on making sure its implementation matched what they were doing with FreeSync.
IIRC it's a bit of both. Adaptive Sync was present in the eDP standard for things like laptop and tablet panels as a power-saving feature. AMD brought it to the desktop side of things to use for variable framerates in games, and helped get it added to the full DisplayPort standard too.
FreeSync utilises Adaptive Sync. Adaptive Sync is the technology on the screen side: a screen must support Adaptive Sync to be usable by FreeSync. FreeSync is just the AMD side of it; if an Adaptive-Sync-capable screen is detected, you can turn on FreeSync in the drivers. Adaptive Sync was a standard written up and proposed by AMD and submitted to (I forget which it was) the DisplayPort group directly or to VESA. They accepted it and implemented it pretty quickly, but as with all standards it takes a product cycle or two to get integrated into shipping hardware.
Anyone can use Adaptive Sync panels, but no one except AMD can use FreeSync, as it's specific to their hardware and drivers. Intel will produce their own driver/implementation and just connect to Adaptive Sync panels in the same way.
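To make that driver-side detection concrete: it mostly amounts to reading the panel's supported refresh range from its EDID and deciding whether variable refresh (and the low framerate compensation mentioned above) is viable. A minimal sketch in Python; the descriptor layout is from the EDID spec, but the sample bytes and the 2x-range rule of thumb for LFC are assumptions for illustration, not Intel's or AMD's actual driver logic:

```python
# Sketch: decide whether a panel can do variable refresh + LFC from its EDID.
# Real drivers also check DisplayPort DPCD capability bits; this only looks at
# the Display Range Limits descriptor (tag 0xFD) in the 128-byte base block.

def refresh_range_from_edid(edid: bytes):
    """Return (min_hz, max_hz) from the range-limits descriptor, or None."""
    for off in (54, 72, 90, 108):              # the four 18-byte descriptor slots
        d = edid[off:off + 18]
        if d[0:3] == b"\x00\x00\x00" and d[3] == 0xFD:
            return d[5], d[6]                  # min / max vertical rate in Hz
    return None

def vrr_capability(edid: bytes) -> str:
    rng = refresh_range_from_edid(edid)
    if rng is None or rng[1] <= rng[0]:
        return "fixed refresh only"
    lo, hi = rng
    # Rule of thumb: LFC needs max >= 2x min so low framerates can be frame-doubled.
    return f"VRR {lo}-{hi} Hz, LFC {'possible' if hi >= 2 * lo else 'not possible'}"

# Hypothetical panel advertising a 48-144 Hz range:
fake_edid = bytearray(128)
fake_edid[54:63] = bytes([0, 0, 0, 0xFD, 0x00, 48, 144, 30, 160])
print(vrr_capability(bytes(fake_edid)))        # -> VRR 48-144 Hz, LFC possible
```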
So Intel is going straight to chiplets on an interposer. It will be interesting to see whether AMD adopts this with Zen 3 or waits until Zen 4. Anyway, it's nice to see competition doing its job.
We don't know yet exactly how much logic Intel is moving to the interposer. It looks awesome for mobile form factors! I think they will face some challenges to bring it to high-TDP desktop solutions, though.
What I am saying with those links is that AMD has already evaluated the use of active interposers, and is the company articulating a data protocol for active-interposer routing that is agnostic to each chiplet's internal routing, so that any chiplet could be attached to an active interposer and work. Further, the 2017 white paper directly shows that the reason AMD hasn't yet adopted the active interposer is cost. Passive interposers can be used, but don't offer the benefits of active interposers. The cost of an active interposer fabbed at 32nm or below, however, is about the same as doing a monolithic chip. As such, adoption doesn't make sense until those costs come down. https://spectrum.ieee.org/tech-talk/semiconductors...
The earlier links show that AMD researched the optimal topology for active interposers in 2014 and 2015: the optimal core count for the benefit, latencies, etc.
Now, if you look at packaging solutions, say from GF, you find that 2.5D and 3D integration is only available on the 32nm and 28nm processes, with 14nm coming soon. https://www.globalfoundries.com/sites/default/file... I bring up GF doing the packaging because, due to the WSA, even if they cost more than competitors, being able to count the wafers used for the active interposers against the WSA would reduce the fines paid under that contract for doing fabrication at TSMC, potentially making it cheaper on net.
Now, there is an article today from AnandTech saying Intel is releasing the B365 chipset on 22nm ( https://www.anandtech.com/show/13714/intel-adds-b3... ). Intel was forced to move chipsets back to 22nm because the 10nm process wasn't ready for mass deployment, which caused a shortage of 14nm capacity; that meant Intel could not shut down its 22nm fabs. A fab needs to stay as close to capacity as possible or it bleeds money (hence why AMD eventually went fabless). So Intel using 22nm fabs for a 22nm active interposer is just Intel making lemonade from tech that few others have adopted yet on cost grounds. If you go back to the cost study AMD did, an active interposer at 20nm costs around as much as a monolithic die, which is what Intel has been building to date anyway. So it isn't really costing them more to add the active interposer, while it keeps the 22nm fabs full of jobs, and yields on the interposer are excellent because it is a very mature node with low defect rates. If you look at the area AMD estimated is needed for the logic routers, only 1%-10% of the active interposer needs to be active logic, so the chance that a critical defect lands on the active interposer is very low.
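As a back-of-the-envelope illustration of that last point, a simple Poisson yield model shows why a mature node plus a small critical area makes interposer defects a non-issue. The defect densities and areas below are invented for illustration, not figures from AMD's or Intel's papers:

```python
# Sketch: Poisson yield model Y = exp(-D0 * A_critical).
# D0 (defects per cm^2) and the areas are illustrative assumptions only.
from math import exp

D0_mature_22nm  = 0.05    # assumed defect density on a mature node
D0_leading_edge = 0.30    # assumed defect density on a new node

interposer_area   = 4.0   # cm^2 of interposer, but...
critical_fraction = 0.05  # ...only ~5% of it holds active routing logic
big_die_area      = 4.0   # a hypothetical monolithic die of the same size

yield_interposer = exp(-D0_mature_22nm * interposer_area * critical_fraction)
yield_monolithic = exp(-D0_leading_edge * big_die_area)

print(f"active interposer yield   ~{yield_interposer:.1%}")   # ~99%
print(f"leading-edge monolithic   ~{yield_monolithic:.1%}")   # ~30%
```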
But as for this forcing AMD's hand, that is the wrong way to think about it. AMD already plans to adopt active interposers, just not until the costs of doing so are lower. They will check the IC Knowledge lists to see when the costs make sense.
But if you look at all the data I provided, AMD already has solutions to the problems encountered with using an active interposer; all they are waiting for is for it to be cost effective. Intel isn't leading anything here, other than doing it because they need to justify and subsidize keeping certain 22nm fabs open for their chipsets due to the 10nm delays. Doesn't take a genius to figure it out, just someone paying attention to the tea leaves.
Also, I forgot to mention that 2.5D integration of HBM PHY onto a stitched interposer was accomplished last year. This suggests HBM could be added to an active interposer when AMD eventually adopts one. Meanwhile, with only 1-10% of the active interposer being used according to their papers, there is room in the future for elements of the I/O chip to be moved onto the active interposer as well. The question is what the benefit would be of producing those parts on the 32nm or 28nm nodes versus keeping the disaggregated I/O die on 14nm. But I would bet AMD has an idea of what would be better where and will consider it when they eventually adopt it, considering the detailed paper on routing topology for active interposers in that group of links.
Thank you for the detailed response, it's very informative, but it's beside my comment: IF Intel ships its chiplet-on-interposer design in 2019, they will be first (regardless of the reasons or economics behind it). As for AMD not doing it now, that's again beside my comment; I wondered whether they will do this for Zen 3 or Zen 4, not Zen 2. Zen 3 will launch in 2020, when (as far as I know) the WSA expires, so what GF does or doesn't do is not that relevant (I would be curious to know about TSMC's or Samsung's packaging/interposer offerings). The result (if they implement it in Zen 3/4) will come down to cost (as you point out), socket compatibility (can it be done on the current server socket?) and planning (the Zen 3 design is almost done right now, so even if the interposer cost comes down, AMD can't implement it if they didn't predict this a year ago).
Also, AnandTech: I am not versed in web design, but is it really that hard to have a comment section that doesn't utterly suck? Is it really that hard to introduce an edit function, or at least a delete one? (An upvote/downvote system would be asking for too much.)
Forgive me; many think that by using it Intel is showing they are significantly ahead in the field of active interposers and 2.5D/3D chiplet integration, meaning it would take years for other chip designers to catch up, which isn't the case. So I do apologize for making that assumption about you.
What those links do show is that AMD has plans to do so in the future; it is just a matter of timing. On the socket part, there is a chance they are introducing a new socket to support PCIe 4.0 on the upcoming Zen 2 chips, judging by their wording at the Next Horizon event on Nov. 6th. Meanwhile, PCIe 5.0 will potentially be finalized in the first half of next year, and AMD did mention DDR5 potentially being available for Epyc 3 chips based on Zen 3, but said mainstream chips will not support DDR5 in 2020 (which leaves open whether TR HEDT platforms get DDR5 support at that time). Intel has not provided information on when PCIe 4.0, 5.0, or DDR5 will be supported. As such, AMD may keep the CPUs backwards compatible with existing sockets, but may require a new socket for the boards carrying the new features, which I think is understandable to many in the server and workstation spheres. It is also the reason I may wait until 2020 to upgrade my 1950X rather than doing it next year (if you are going to buy a new board and there is a chance it won't have the feature sets releasing that year, waiting one more year is fine IF your workloads won't suffer for the wait).
But I read somewhere that Intel hinted at 1GB of RAM being integrated with the active-interposer processors, acting as an L4 cache. That is not novel (the AMD whitepapers from 2014 and 2015 I linked specifically dealt with on-package integration of memory and the latencies involved, suggesting we could see some type of 3D memory solution integrated when AMD does incorporate an active interposer), but the latencies of Crystalwell, the eDRAM on Broadwell, suggest Intel will get a significant uplift in certain workloads, as well as keeping the chip primed and going off-chip less often for memory calls, which is fantastic. Intel also kept the power delivery part under wraps, which is something that does excite me, but we were given no information about it (possibly bringing FIVR back, which was rumored for Ice Lake and Tiger Lake anyway).
Also, on compatibility: part of the reason I gave the article discussing data protocols for active interposers is that the routing is chiplet-agnostic, meaning you can integrate parts that have their own internal routing without affecting it. Then it just comes down to appropriate socket wiring; as I mentioned, they may need new sockets for the new features while maintaining drop-in compatibility on the consumer side.
Just clarifying: do you mean that to use such methods, AMD would need to drive up production costs, which reduces their profit/value, but since Intel already has a high production cost, such methods would not affect them?
tommybobberson - in a manner of speaking, yes. How much it costs to produce affects margins and pricing. One reason AMD could price the high-core-count chips the way they did, aside from effective yields, is the decreased cost of producing high-core-count products. If AMD added the active interposer now, it would raise production costs to roughly the same as a monolithic die, which would then require either an increase in the market price of the products or a decrease in margins.
Intel, on the other hand, never captured the extra margin of going to chiplets before introducing the stacked chips. That means the active interposer costs them relatively little, since the price of doing it matches the cost of the monolithic dies they were already making. As such, Intel can maintain current pricing while introducing stacked dies on an active interposer. Further, by filling up 22nm fab time (10nm caused a shortage of 14nm capacity, which forced Intel to move certain lines back to 22nm, likely forcing some plants planned for closure or retrofit to remain on 22nm), Intel can offset some of the cost of the 10nm delay.
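To put rough numbers on that margin argument, here is a toy cost-per-good-die comparison; the wafer price, die sizes, and defect density are invented purely for illustration and are not AMD's or Intel's actual figures:

```python
# Sketch: why chiplets can undercut a monolithic die on cost per good die.
# All inputs are illustrative assumptions.
from math import exp

def cost_per_good_die(wafer_cost, die_area_mm2, d0_per_cm2, wafer_area_mm2=70_000):
    dies_per_wafer = wafer_area_mm2 // die_area_mm2          # ignores edge loss
    yield_frac = exp(-d0_per_cm2 * die_area_mm2 / 100.0)     # Poisson yield model
    return wafer_cost / (dies_per_wafer * yield_frac)

# 8 small chiplets vs one big monolithic die, same total area and wafer price.
monolithic = cost_per_good_die(wafer_cost=10_000, die_area_mm2=640, d0_per_cm2=0.3)
chiplets   = 8 * cost_per_good_die(wafer_cost=10_000, die_area_mm2=80, d0_per_cm2=0.3)

print(f"monolithic: ${monolithic:.0f}, 8 chiplets: ${chiplets:.0f}")
# The packaging / interposer cost then eats back into the chiplet saving,
# which is exactly the trade-off described above.
```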
First they mock your chiplets by saying you glue your dies together, then they fail to compete with you, and then years later they copy your chiplets and call it revolutionary. This is why Intel can no longer have public presentations of their own failures and has to have privately controlled presentations.
There's nothing whatsoever revolutionary about "chiplets". A 10 year old core 2 quad used exactly the same technology that AMD calls "chiplets" in 2019 and AMD fantards like you even insulted the Core 2 quad for doing it.
Maybe you should actually read the article about what an active interposer can do vs. wiring standard hunks of silicon to a PCB in literally the same way it was done in the 1970s, before you run around acting like AMD is the only company to ever innovate anything.
I've been reading articles about Intel's 10nm promises for years now. And then we got failed laptop chips and low-power PC boxes to appease 'mass production' status and not get sued by investors for false claims. Forgive me if I abstain from drooling until Intel actually delivers something that does not require industrial chillers. BTW, where are those 28-core HEDT chips anyway?
There is always a point where WHEN something is used sets a new direction. Multi-CPU workstations/servers moving to a single processor with multiple cores was a big shift. Moving from two cores linked together, back when cache coherency was a big problem, to a single dual-core die without an interposer was a better way to go. It all comes down to whether there is a performance boost or degradation as a result of the technology/implementation.
With that said, a single CPU with 64 cores is fairly significant, and keeping the performance from being horrible with that many cores is the reason AMD has been praised. Price/performance for the server market and such.
For a long time, Intel was seen as the king when it came to clock speeds and performance, but Intel hasn't had a significant boost to IPC in over three years. Intel has also been promising 10nm for three years, and there's still no sign of it, with the latest promise being holiday 2019.
So, Intel still has nothing; they have vague promises of ways they will improve performance, but it remains to be seen whether performance will actually be better if 10nm slips again. On the flip side, AMD clearly has significant performance boosts coming from Ryzen 3rd generation in 2019 (March/April is when many expect it). 7nm from AMD isn't a "will they?" question, it isn't even a "when?", with CES in one month and, with it, the answers. IPC improvements from design changes not related to chiplets at all would be good, as well as higher clock speeds. So there is potential for 30+ percent higher performance in one generation.
Yes, I don't expect AMD to deliver huge performance jumps again for years, but we may see things such as Gen-Z support, going beyond two memory channels for the mainstream Ryzen chips when the next socket comes out in 2020/2021, and other things that may boost system/platform performance while AMD figures out how to get more CPU performance.
Intel is still trying to do things the same way, just faster. Faster CPU, faster links to individual devices, fabric on a system level will be Intel trying to reinvent what AMD has been working toward.
I will also note again that some things are not always about being new, but are more about presentation and implementation. Palm really pioneered the idea of apps that users could install on a small portable device (the PDA), but Apple popularized it with the iPhone. In some cases the implementation really is good and will get the respect of the industry; in other cases, you see that something is clearly a case of following the lead of another player.
So, in the PC industry, is Intel leading the way with innovations, or is AMD in the driver's seat?
Using a term like "fantard" straight-up devalues your argument, but the blatantly false statement about the C2Q using "exactly the same technology" seals the deal.
Chiplets refer to the CPU being divided into multiple sections (cores and uncore) on a single package using dedicated interconnects. It's not at all the same technology as having two discrete CPUs joined by the FSB on a single package. Both are novel approaches to particular problems, although the C2Q (and the Pentium D before it) were criticized for their inefficiency in using the FSB for inter-core communication. We don't know how chiplets will pan out yet, so the jury's out.
Bash the fans for talking nonsense all you want, but maybe don't sink to their level.
The electric car was pioneered more than a hundred years ago. It's one thing to pioneer something and a whole different thing to actually develop it into something that is affordable to millions and drags the whole industry forward.
If you think pioneering is all there is to it, I have hundreds of graphene battery designs you should invest your narrow-minded life savings into.
Would that be Marvell's chip from 3 years ago that people are suddenly talking about... which is just a chip on an interposer with an HBM stack? HBM which AMD co-developed, had prototype APU packages using in 2011, and then launched with Fury X on an interposer 3 years ago.
Right, but AMD didn't pioneer it, because Marvell made a dramatically less complex switch using the same concept at the same time AMD did. Except, you know, AMD co-developed the memory, they worked with the packaging plants for years to be able to mass-produce interposer-packaged products, and Fury X was the reason HBM went into full production in the first place... but sure, AMD definitely didn't pioneer this latest move.
Well... yeah. Intel had multi-chip CPUs almost 15 years ago (look up the Pentium D). And it was mocked by AMD and its fans. I thought the mocking Intel did last month was just to remind AMD of what they said back then, and I found it hilarious. But maybe not. People don't change, and it's completely plausible that it was said by someone who doesn't know the history.
Never mocked by AMD. They just made sure to point out that their dual-core X2 and quad-core Phenom were "real" multicore chips. It mattered more back then because Intel's boards had FSBs instead of direct connections, and the separate chips had to talk through the FSB and chipset northbridge to reach each other. Also, AMD hadn't been rambling on about their glue tech for years with no product on the market using it (Kaby Lake-G isn't actual EMIB) when they made their statements.
AMD promoted their products by saying they were better because they were "real" multicore. There is a negative connotation with that, but it's different from calling out your competitor for using "glue".
For years Sohail Ahmed was the roadblock to Intel doing any creative package technology by blocking silicon support. He finally was shown for what he was after 2+ year process technology delays and re-invented this. Good riddance Sohail
How do the Intel process delays influence their microarchitecture plans? 10nm was already supposed to be here, and with it its new microarchitecture, Ice Lake. Does the Ice Lake design continue to get improved as the delays keep happening, or was it finalized years ago? Why? What about the microarchitectures succeeding Ice Lake?
LOL, you went into full fanboy mode calling the 144mm² package small. The package is that size for PoP; the die is much, much smaller, and they can fit way more 10nm cores than that in such a large area, even without a base die. Anyway, there are no relevant details on Foveros and that's problematic. The first question is cost, then you would want to know details about pitch and so on. Intel is by no means the first to announce such a solution, so the details are what matter. This is how you get to a 3D monolithic die in some years, so any foundry that wants to stay in the most-advanced-node game needs to push its packaging roadmap.
And you acted like someone clueless who has put all his savings into Intel's stock based on a brand. "Large revenue stream" LOL; you don't know anything about the product, the target market, or the packaging solution, but you get all wet.
Anyway, took a look at the slides and they do actually provide some relevant details that you seem to not have noticed. They say face-to-face stacking, bump pitch at 36µm, and bump density at 828/mm².
That's a nice fan on their fanless SoC design. It seems like Intel is really playing catch up with their ultra mobile designs (with a power envelope that's likely still above phones) fabbed on a late 10nm design and only now doing things like heterogeneous cores and PoP memory that the AX and Snapdragons have been doing for a few years now.
I wonder what portion of that block diagram's compute reside on the P1222 (which looks bigger die space wise on their diagram than the P1274 but may pack fewer transistors.) They mentioned IO but it wouldn't make too much performance sense for caches and coherency to segregate the Big/Little cores onto separate dies, so it likely contains the ISPs and memory controllers. Where the GPU is located is another interesting question and no LTE modem can be seen on the block diagram. This is also likely to have higher packaging cost and worse integration than the AX, Kirins, PXX's or Snapdragons.
Dev boards have fans because they don't have a chassis to sink heat into. If it's like every other dev board I've used, it is probably an 8k RPM screamer, too.
Seems like you failed to read page 4: "These were development systems with these funky looking heatsinks and loud fans to ensure there was no thermal throttling."
My comment was in reference to their new Atom lineup which are supposed to be fanless, not Sunny Cove which operates in a much higher power envelope. Surely they could use a larger passive cooling unit to demonstrate what's supposed to be a key feature of that product; my other point is that Intel's aiming for something later next year that seems much less advanced than what Apple, Huawei, Mediatek, and Qualcomm already have in production today.
If that's what you took from that part of the presentation then I feel you missed the point. What Intel are doing in terms of chip layout is something none of the companies you mentioned are able to do yet, which is the fundamental point here - whether it results in a useful product is another matter entirely, but it's still fascinating and not same-same.
To back up the other people who responded to you, again, the fan is irrelevant. It's a dev board.
Your point is taken, and Keller did say it was in its infancy, but I am interested in whether what we're seeing here will be a competitive product or will remain an interesting science experiment. There are theoretical benefits to stacking high-performance dies on low-leakage ones like this, but also substantial challenges and deficiencies that the current iteration hasn't shown it has overcome. What we might gain in overall area, package-level fab rejection rates, and net power characteristics could be offset by a worse concentration of heat, and hence more throttling when both elements are running, or by more expensive packaging. Perhaps in the end a monolithic die is a better compromise for mobile, despite losing out on some metrics.
So the GPU is going to be called ... Ten to the Eeth power? Is that right?
I reject all these Xes used in unpredictable ways. The iPhones are pronounced exar and excess. This is ecksee, and I still use oh ess ecks on my emm bee eh at home.
> Intel actually says that the reason why this product came about is because a customer asked for a product of about this performance but with a 2 mW standby power state.
Huh, wonder who the customer for that Core/Atom hybrid is. Seems a bit overpowered for a tablet, a bit underpowered for a MacBook (or for a car). Chromebooks maybe, but most are too low-volume to demand a custom part (maybe the education market is taking off?). PC OEMs don't normally take such custom parts for their laptops. But the graphics loadout implies some kind of PC-type application?
From the diagram, it appears that the hybrid CPU has a single big core plus 4 small (Atom) cores. Similar technology is used in Samsung processors; this would mean it is still lower power, but still has the single-thread speed of the primary core.
Most interesting would be how the smaller cores are used by the scheduler. Most likely it means an enhancement in the OS for proper usage.
Microsoft Surface, obviously. It's become a very profitable line for MS, but the current models are either too battery-hungry (Core CPUs) or too slow (Atom CPUs). Foveros will give the best of both these worlds while also being x86... priced right, a Foveros-based Surface would essentially end any argument for iPads in a business environment, especially considering most software remains firmly single-threaded. But it remains to be seen whether (a) Intel can get the power down even further (7W is still double most smartphones) and (b) their big.LITTLE implementation is good enough.
Windows on ARM will do just fine now that Visual Studio emits ARM native code. Once Chrome gets ported (and that will be soon: https://www.neowin.net/news/both-chromium-and-fire... ), the platform should address 95% of typical daily use cases and provide substantial compatibility with legacy software / file formats. This is better value than iPads, and upcoming dedicated SoCs like the 8cx should offer better performance and battery/heat characteristics than what Intel has planned for next year in the same power envelope.
I think you missed the part where Windows on ARM is horribly slow and therefore shitty. As a result, Microsoft has no plans to port anything useful (e.g. Office) to ARM, which means Windows on ARM is stuck being the lowest of the low-end. And that's not a space that Surface is intended to play in; Surface is an iPad competitor, and an iPad competitor can't be slow and shitty. Business devices can't be slow and shitty, and they absolutely need to be able to run Office.
I expect that either Windows on ARM will be allowed to wither and die once Foveros ships, or it will languish in a dead zone where only the cheapest of the cheap devices from no-name OEMs (think $100 Lenovo tablets) use ARM chips and hence need it.
So unless Qualcomm's 8cx is a game-changer in terms of performance, Foveros should be the end of ARM on the desktop, and thank fucking God for that.
Microsoft already have an Office code base on ARM, so I'm not sure what you're talking about there.
What would worry me about an Intel big.LITTLE-style design is that if Windows doesn't assign your performance-critical application to the correct (big) core, performance will mostly suck just as hard as if all your cores were Atoms.
As such, I'd be cautious on calling a winner just yet.
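For what it's worth, until the OS scheduler fully understands hybrid cores, the blunt workaround is pinning. A small sketch using the psutil package (works on Windows and Linux); which core indices are the "big" ones is an assumption here (0-3), and a real tool would have to discover the topology first:

```python
# Sketch: pin the current process to the assumed "big" cores so the scheduler
# can't park a latency-critical thread on an Atom-class core.
# BIG_CORES is a made-up mapping for illustration, not a real topology query.
import psutil

BIG_CORES = [0, 1, 2, 3]

def pin_to_big_cores():
    p = psutil.Process()
    allowed = p.cpu_affinity()                 # cores we may currently run on
    big = [c for c in BIG_CORES if c in allowed]
    if big:
        p.cpu_affinity(big)                    # restrict scheduling to big cores
    return p.cpu_affinity()

if __name__ == "__main__":
    print("now pinned to:", pin_to_big_cores())
```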
Agreed with this; Microsoft has been let down by Intel not having a good mobile platform. If it were up to them, they wouldn't bother with ARM, but they have to due to battery/power/heat requirements.
For ultra-mobile, it's not only battery/power/heat issues; supply is one as well, due to Intel being locked to its own manufacturing division. On top of that, they have a lock on x86 by not licensing it to any competitor but AMD, who despite competitive stretches inevitably stumbles (either due to themselves or Intel's non-engineering financial efforts) and leaves the industry with dry spells of performance improvements. Intel's gross margins on their chips remain >60% as a result, whereas ARM SoC margins after licensing are closer to 20-30%.
Keller declared that the technology is in its infancy, and feature wise the 2019 version of the Atom simply won't be competitive with leading ARM SoCs like the 8cx. The slowness you refer to only occurs when running native 32 bit x86 code on the WOW emulation layer, but the value of this feature is mostly in the compatibility being there at all. If performance and compatibility of legacy code matters to you then certainly Windows on Arm isn't suitable. However, it will matter even less now with the new native compilation tools and ports of important sub platforms like Chromium.
I'm betting Apple wanted one for the MacBook Air, or maybe MS for the Surface Go. It would be the right amount of performance for both devices, and both companies would have the clout to get it done. I'd lean toward Apple because the GPU is pretty big.
Typo: "a physical address space up to 52 bits. This means, according to Intel, that the server processors could theoretically support 4 TB of memory per socket."
That should be petabytes instead of terabytes. The limit is for an entire system, not per socket as additional sockets will not grant any additional capacity.
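The arithmetic behind the correction, for anyone who wants to check it:

```python
# 52-bit physical addressing: 2**52 bytes expressed in TiB and PiB.
bytes_52bit = 2 ** 52
print(bytes_52bit / 2**40, "TiB")   # 4096.0 TiB
print(bytes_52bit / 2**50, "PiB")   # 4.0 PiB, hence "4 PB", not "4 TB"

# For comparison, a 46-bit physical address space (typical of earlier Xeons):
print(2 ** 46 / 2**40, "TiB")       # 64.0 TiB
```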
It sounds like Intel has been working on increasing performance in two ways: 1) 7nm changes for the future, because of limitations found with 10nm, and 2) 10nm enhancements to correct the performance issues seen with Cannon Lake.
But most importantly, architecture improvements like faster single-thread execution, new instructions, and multi-core improvements will significantly improve performance in the long term.
Forgive me if I take 2 metric tons of salt with any roadmaps Intel provides these days. They haven't exactly had the most accurate timelines as of late (past four years)
It appears they realize that and are coming out with documents to indicate they have made corrections. This is better than not knowing what they are planning, or, as some AMD fans would like to believe, that they have lost the battle.
Is it better? Their last roadmaps were not worth the powerpoint slides they showed up in, not to mention the whole "tick-tock-optimise-optimise-delay" fiasco.
From the look of things in this excellent article, it looks like for 2019 Intel is combining both tick and tock, with significant architecture improvements along with process improvements.
Compared to the latest Xeon roadmaps I have seen in NDA meetings, these desktop roadmaps still seem quite ambitious. They don't expect to ship a "lower core count" 10nm Xeon before mid-2020.
Just because Intel did not mention it - does not mean it will not happen.
Also remember that Intel is decoupling the process from the actual architecture. In the past, I always remember the Xeon technologies being forerunners of the base core technology; hyperthreading is one example, and multiple-core support another.
It's 6am for me, and with the mugshots of Jim Keller and Raja Koduri at the end you could have labelled this the AMD architecture day and I would have believed you. It will be an interesting several years as those two put their stamp on Intel CPUs/GPUs.
The just announced Qualcomm Snapdragon 855 has a peak TDP of 5 Watts. Most smartphone manufacturers limit the whole SoC to 4 watts. The average smartphone battery now has >10 Wh, so even at full load the device would still run between 1.5 (display on) and 3 (display off) hours. Which it has to in the hands of those gamer kids.
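That runtime range falls out of simple energy-over-power math; the display power figure below is an assumption, the rest echoes the numbers above:

```python
# Rough runtime = battery energy (Wh) / total draw (W).
battery_wh       = 10.0   # average smartphone battery, as cited above
soc_full_load_w  = 4.0    # the whole-SoC limit most phones enforce
display_w        = 2.5    # assumed for a bright phone screen (illustrative)

print(battery_wh / soc_full_load_w, "h, display off")                # 2.5 h
print(battery_wh / (soc_full_load_w + display_w), "h, display on")   # ~1.5 h
# Roughly the 1.5 to 3 hour range quoted above for full-load gaming.
```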
It's exciting to see Intel use FreeSync in their graphics. They could have easily gone with some proprietary solution, then we'd have three competing monitor types. Hopefully having both AMD and Intel on FreeSync will prompt Nvidia to at least support it alongside G-Sync.
They committed to Adaptive Sync back with Skylake, but it's taken this long to see it because they haven't released a new GPU design since then. It would have been a *very* weird move to suddenly release their own tech.
I think it's more likely NVIDIA just waits for HDMI 2.1, which supports VRR as part of the specification.
I also suspect HDMI 2.1 will eventually kill off DisplayPort entirely. Now that HDMI offers more bandwidth, and given DisplayPort is a non-factor in the consumer (TV) market, there really isn't a compelling reason for it to continue to exist alongside HDMI. We *really* don't need competing digital video connector standards, and HDMI isn't going anywhere.
HDMI is fantastic for AV, but has NO PLACE WHATSOEVER on desktop monitors. It causes a multitude of problems by abusing a standard intended for very specific combinations of resolutions and refresh rates (and a completely different colour range and colour space standard), and offers zero benefits. Get it the hell off the back of my GPU, where it wastes space that could be occupied by a far more useful DP++ connector.
Setting all else aside, DP is "better" IMO because the plugs lock. HDMI and mini-DP both have no retention system, and that makes them something I do my best to avoid both personally and professionally; love the "my monitor doesn't work" calls when it's just that someone moved their dock and wiggled the mini-DP connector.
7nm != 7nm in this case; in fact, Intel's 10nm process looks to be just as dense as TSMC's 7nm. I think the question is more about how quickly TSMC/GF/Samsung can offer a 5nm process, because I wouldn't expect a manufacturing lead anytime soon (assuming 10nm processors come out on time).
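For context, these node-name comparisons usually come down to published logic-transistor-density estimates. The figures below are commonly cited approximations (vendor statements and WikiChip-style estimates), not numbers from this article:

```python
# Commonly cited logic density estimates (million transistors per mm^2), approximate.
density_mtr_mm2 = {
    "Intel 14nm":    37.5,
    "Intel 10nm":   100.8,
    "TSMC 7nm (N7)": 91.2,
}
base = density_mtr_mm2["TSMC 7nm (N7)"]
for node, d in density_mtr_mm2.items():
    print(f"{node:14s} {d:6.1f} MTr/mm^2  ({d / base:.2f}x vs N7)")
```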
The node names are marketing jargon. Intel's 10nm = TSMC 7nm for all intents and purposes; Intel's 7nm = TSMC 5nm/3nm, approximately. TSMC is doing volume 5nm EUV next year, IIRC, for Apple during H2, while working on 7nm EUV for AMD (or something like that), with 5nm being offered in 2020 products alongside 7nm EUV. Intel's current info shows 7nm with EUV in 2021, but that is about the time TSMC is going to get to 3nm, alongside Samsung, which is keeping roughly in step with TSMC on process. Intel will never again have a lead like they had. They bet on EUV and partners couldn't deliver, and then they just kept doing Skylake refreshes instead of porting designs back to 14nm like the one engineer said he told them to do; Intel didn't listen.
I see nothing groundbreaking from Intel unless they can solve the cobalt issues; due to the resistance of the interconnects at the smaller nodes, cobalt is a necessity. TSMC is waiting to deal with cobalt, same with Samsung, while Intel uses it along with ruthenium. Meanwhile, Intel waited so long for EUV to be ready that they gave up and are now waiting for it to mature, while TSMC and Samsung push ahead with it even with the known mask issues and pellicles not being ready. The race is fierce, but unless someone falters, or TSMC and Samsung can't figure out cobalt and the other new materials once Intel cracks the code, no one will have a clear lead in the years moving forward. And use of an active interposer doesn't guarantee a clear lead, as others (including AMD) have the tech but have chosen not to use it on a cost basis to date. Intel had to push chipsets back onto 22nm plants that were going to be shut down; now that they cannot be shut down, keeping them full to justify the expense is key, and 22nm active interposers on a process that has been around the better part of the last decade (high yield, low cost due to maturity) is a good way to achieve that goal. In fact, producing at 32nm and below, in AMD's cost analysis, costs about the same as doing a monolithic die. That means, since Intel never got a taste of chiplets giving better margins with an MCM, Intel won't feel a hit by going straight for the active interposer, as the cost is going to be roughly what their monolithic dies cost.
Exactly - it reminds me of the frequency wars back in the P4 days. But if you look closely at Intel's plan - I am no chip designer, even though I did take microcode engineering classes in college - Foveros is a revolutionary design. I thought EMIB was amazing, but to do that in the 3rd dimension is awesome; maybe one day they could even stack cores that way, instead of huge chip monsters.
But an nm rating from vendor 1 does not equal an nm rating from vendor 2; what's underneath makes the difference. Intel is extremely smart to decouple the nm process from the actual architecture. If you notice, Intel's roadmap has more improvements in core architecture over the next 3 years; this is because they are not limited by process (nm).
EMIB was not revolutionary and neither is foveros. They are incremental steps and existing competing solutions are available and have been for some time. Not only that, it will only be used on select products with eventual spread to the stack.
Go to the second page of comments and see my links there; I think you will find them quite interesting. Not only that, this has been done with HBM for years now. If you look at AMD's research from almost half a decade ago, they were studying optimal topologies for active interposers. They found only 1-10% of the area was needed for the logic routing of an active interposer. Moving a couple of I/O items onto the active interposer is just an extension of that. In fact, you can put those components on the interposer spread out between the chiplets that sit above it, but you would need to plan for the heat dissipation, or keep the heat so low that it doesn't need to be sunk.
Considering the lack of details on what is on the active interposer, or a timeline for the mainstream, HEDT, and server markets, I will assume those won't see this until 2020, with the first products being mobile in nature.
In fact, Intel this summer gave AIB patents to DARPA to try to influence what tech is used for chiplets moving forward, proposing that it be used. AMD proposed a routing logic protocol that is agnostic to the routing on the chiplets themselves, increasing compatibility moving forward.
Now, if EMIB is so "revolutionary", do the Intel chips with AMD GPUs seem revolutionary? Because that is the only product that comes to mind that uses it. Those chips are Hades Canyon and Crimson Canyon. It isn't that dissimilar to other data fabric uses.
So far, on disaggregation of chip components, AMD's Epyc 2 is getting there. It literally uses just cores and the interconnect on the chiplet (for this description I am including cache with the cores, but once latency is reduced with active interposers I do expect L3/L4 or higher caches, or integrated memory on package, to be introduced external to the "core" chiplet). From there, we could see the I/O elements further subdivided, we could see GPUs, modems, etc. But all of this has been planned since the 2000s, so I don't see anything new other than the culmination arriving around the same time other alternative solutions are being offered; the cost/benefit analysis just hasn't tipped in its favor yet, but should in the next year or so, which should bring many more designs to the forefront. Here is a presentation slideshow discussing the state of current 2.5D and 3D packaging. After review, I'd like to hear if you still think EMIB and Foveros are "revolutionary." Don't get me wrong, they are an incremental success and should be honored as such, but revolutionary is too strong a word for an incremental step. Overall, it changes nothing and is the culmination of a lot of work over a decade by numerous companies and engineers. Even competing solutions can act as inspiration for another company, and Intel's engineers read the whitepapers and peer-reviewed articles on the cutting edge just like everyone else in the industry.
As to your saying Intel is smart to do it: they haven't done it yet except in lab silicon and in papers, unless we're talking about the EMIB package with the Intel CPU. AMD has a product line, Epyc 2, where the I/O die is made at GF on 14nm and the chiplets are made on TSMC 7nm, a greater pitch disparity. Intel hasn't really broken the components off the core chip into separate elements yet. ARM is considering something similar, and this is the logical progression for custom RISC-V designs moving forward too (that may take a little longer, as they are less well funded).
Meanwhile, this doesn't seem to stack high-performance cores on high-performance cores. The problem of thermals cooking the chip is too great to solve at this moment, which is why low-power components are being placed relative to the higher-performance (read: higher heat producing) components. Nothing wrong with that; it makes sense.
But what doesn't make sense is you lavishing flowery praise on Intel for something that doesn't seem all that extraordinary in light of the industry as a whole.
People keep saying the same thing about Intel's 14nm process, which is allegedly equal to or better than other fabs' 10nm processes. But AMD currently makes products on 14nm and 12nm processes which Intel apparently can't build on its own 14nm process. For example, there is still no 32-core Xeon while AMD will soon ship 64-core EPYCs, and lots of other companies have 32/48/64-core designs on the market. Many Intel CPUs have much higher effective TDPs than their equivalent AMD CPUs.
So pardon me if I am not willing to simply believe in all this "Intel's process is better in the end" talk.
But Intel's single-core performance is better than AMD's single-core performance. Just because AMD glues 8-core CPUs together does not make them better.
Gen 11 graphics in desktops is said to reach double the performance of Gen 9.5 desktop graphics. 2W Atoms have half the max frequency of desktop graphics and half or three-quarters of the execution units. The 7W custom hybrid processor has the full complement of execution units; I'd guess it runs at half the frequency of its desktop counterpart to stay within the power limit. This would put it at the same performance as Gen 9.5 desktop parts, or actually at 30% higher performance.
Think about that: 80% of the single-thread performance of current high-end desktop processors (my quick estimate) and 130% of the graphics performance. That's a solid notebook for web browsing, legacy programs and even light gaming, all at the power budget of a tablet.
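The graphics estimate above is just linear scaling, which is worth making explicit since that is the assumption it rests on; both frequency fractions below are guesses, not Intel figures:

```python
# Sketch of the linear-scaling estimate:
# perf ~ (architectural uplift vs Gen 9.5 desktop) x (EU fraction) x (frequency fraction)
gen11_uplift = 2.0    # "double the performance of Gen 9.5 desktop graphics" (claimed)
eu_fraction  = 1.0    # the 7 W part keeps the full EU count

for freq_fraction in (0.5, 0.65):   # half clocks, or a slightly more optimistic guess
    est = gen11_uplift * eu_fraction * freq_fraction
    print(f"freq x{freq_fraction}: ~{est:.2f}x Gen 9.5 desktop graphics")
# 0.5 gives ~1.0x (parity with desktop Gen 9.5); 0.65 gives ~1.3x, i.e. the "130%" figure.
```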
Keep in mind that 3DXpoint came to market three years past the initial promise from Intel. 10nm will be appearing 4 or 5 years late depending on when volume production materializes.
Chances are that this 3D stacked promise for late 2019 will show up around 2022.
I'm seeing a lot of fellow Intel fanboys show a semblance of confidence that has been absent in recent months, and rightfully so.
Let's all hope Intel can deliver this time on time.
I disagree on worrying about Intel with the active interposer. They have been using passive interposers for the mesh on HEDT, Xeons and Xeon Phi for years now (since around 2014). The 22nm active interposer is there to fill out fab time, since chipsets were pushed back onto plants that were going to be shut down for the move to 10nm, which never came.
Meanwhile, AMD did a 2017 cost analysis saying that below 32nm would cost as much as a monolithic die, so it seems they are waiting due to cost, not on technical capability.
Either way, Intel doesn't hit 7nm until 2021, around the time 3nm may be ready at TSMC, if they go to 3nm within a year of volume 5nm products expected in 2020. That means Intel will never regain the process lead moving forward in any significant way, unless everyone else gets stuck on cobalt integration.
"We have a new method inside the company to decouple IP from the process technology. You must remember that customers buy the product, not a transistor family. It’s the same transformation AMD had to go through to change the design methodology when they were struggling."
...doesn't that basically mean they're going fabless, or are at least going to develop a design for multiple processes so they can also use other fabs as well? Not that I'm disagreeing with that! If I was Intel I would have started doing so years ago, when everybody else was starting to do it.
I think this just means fewer hand placed/routed custom logic blocks, and more synthesis/APR. If you look at most other dies these days, you see a sea of gates. When you look at intel dies, in the cpu cores you see very tight, regular logic blocks. Every mention of "abstraction" in the Q&A screamed synthesis/APR. This may make it possible for them to port to other foundries if they wanted to, but I doubt they would.
An interesting question is whether this has any implications to power/timing/area optimization vs. the current approach, as a lot of their ability to push very high clocks might come from how much more custom logic design goes into their implementation.
10nm is already stated for Sunny Cove in 2019, so this is likely 7nm - but keep in mind the architecture is decoupled from the process, so it could be 10nm or 7nm.
Ian, the active interposer isn't new, and I want to know more about exactly what has been moved to the active interposer. AMD's whitepapers on the topic, using routers etc. on an active interposer, were published in 2014 with a follow-up in 2015. In late 2017, AMD published a white paper dealing with the costs of doing so, where producing an active interposer on anything smaller than 40nm resulted in costs about the same as producing a monolithic die. AdoredTV did two videos on this topic months ago, as a matter of fact. So, unless they are sticking some of the features that AMD put on the I/O die onto the active interposer, this is doing what AMD plans to do but chose not to do yet on the basis of cost. Check out these papers and videos:
Intel now seems to be using ideas from everywhere else in the industry, while also using 22nm fabs that would have been slated for decommissioning if not for the 10nm fiasco; they had to push certain chipsets back to 22nm due to the 14nm shortage, meaning they need to keep that fab time full to justify keeping the lights on, and a 22nm active interposer fits the bill. The article practically writes itself.
Thunderbolt is dead in mainstream PCs at this point, because there's no use-case in which it outperforms USB by enough to justify its cost (both of implementation and in the peripherals that people actually want to use). It's become another almost-Mac-exclusive like Firewire, and will share the same fate.
The number of those sold can be counted on one hand. Ok, maybe two.
In all my years I've used a FW cable exactly once.
I've seen a total of two thunderbolt cables, our Mac guy had them for some crap he was doing. IOW, dead just like FW. Proprietary stuff just doesn't tend to last long in an open, commodity, market when equivalents are available for free or pennies on the dollar.
Nonsense, it's exactly the other way around. Everything is converging on PCI Express. Every peripheral which can actually profit from USB3 speeds would be better off with a direct PCI Express connection instead, and Thunderbolt can do that. USB3, on the other hand, requires protocol conversion on both ends of the cable. Instead of being able to just plug a normal GPU, an off-the-shelf NVMe SSD, or any other standard device into an external enclosure without any hardware or driver adaptations, people have to come up with USB3 interface chips and special USB variants of every device.
I guess you have never truly experienced Thunderbolt. It has a 40Gbps rate compared to USB 3.0, and currently drives a dock powering multiple monitors plus multiple devices. Also remember that it is a superset of USB-C Gen 2.
“a customer asked for a product of about this performance but with a 2 mW standby power state. In order to do this, Intel created and enhanced a number of technologies inside the company. The final product is apparently ideal for the customer, however the chip will also be made available for other OEMs.”
Is there a reason why, when I expand the text, your video ads pop up onto the screen, refuse to quit and obscure 1/4 of the display? Is your readers' displeasure less important than advertising dollars? I don't read your articles to buy things, I have Amazon for that; I read to become more informed. Kindly let me know why your pages have become Tomshardware.com lookalikes...
Advertising dollars keep the lights on and pay the salaries of the authors of these articles. Without ads, you don't get to read the content for pleasure, and then it's all a moot point, now isn't it. With that said, could you give constructive criticism on how they could make it less intrusive while still accomplishing the same goal?
I cannot read a thing when the text is partially blocked by a video ad. I will simply read elsewhere. They need ad dollars, and that is more important than my reading their article, obviously. Thanks for your input.
"Without ads, you don't get to read the content for pleasure, and then it all is a moot point, now isn't it."
the 'advert business model' is a crock and scam. always has been. at least in print media, everyone who 'clicks' on the newspaper or magazine has to pass over, at least, the advert. with the innterTubes, the adverts not only steal more and more of the screen, but definitely slow down the process.
and here's what the proponents of the advert business model keep denying: those that block adverts won't interact with them at all. neither the page owner nor the advert buyer actually lose a 'real' customer. the page owners are scamming the advert buyers with 'eyeballs' metric. it means nothing but the buyers are too stupid, or scared, to insist on metrics that measure actual $$$ they earn from clicks. the page owners won't do that because it will reveal that innterTube adverts are of negative infinity value.
I wonder how much longer the 'if we build it, they will buy' mantra works? do all those cube dwellers in MegaCorpUSA play high requirement games all day long?
IP = Intellectual Property. This covers Patents, Copyright, Trademarks, and potentially trade secrets (this is things like processes, etc., that they cannot receive other protections for or that they choose not to patent because there is a time clock after which it falls into public domain, so keeping it secret can potentially allow for longer protection periods).
THIS IS NOT LEGAL ADVICE. IF YOU NEED LEGAL ADVICE, PLEASE SEEK THE ADVICE OF COUNSEL WITHIN YOUR JURISDICTION. THIS IS GENERAL LEGAL INFORMATION, IS PRESENTED AS IS AND WITH ALL FAULTS.
The slide is just marked as under embargo until the contractual embargo of information is lifted. During the presentation, there was an embargo that lifted like hours or a day after the conference. Nothing to really be concerned about.
"although it was pointed out that these improvements won’t help everyone, and might require new algorithms in order to use specific parts of the core."
Which means it will help almost no one, as very few will optimize specifically for that core.
"We’re waiting to see what changes Intel has made on the front-end, which is where a lot of low-hanging fruit often lies for performance."
The low-hanging fruit in x86 was picked back in the Pentium days. Since then it's just been more and more kludges, which cost more energy than they improve performance (normalizing for node).
"The 7-Zip demo was relatively straight forward, showing how the new instructions such as Vector-AES and SHA-NI in Sunny Cove can give the processor a 75% boost in performance over an equivalent Skylake based platform at iso-frequency."
Huh? Did they recompile (what compiler supports the new instructions, then), or manually write a codepath in asm? And did they enable encryption to get any increase at all, so the increase is not actually for compression? Did they disable compression too? ;)
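On the codepath question, the first thing worth checking is whether the CPU even advertises those instructions; a quick Linux-only sketch (the flag names "sha_ni" and "vaes" are the ones the kernel exposes in /proc/cpuinfo):

```python
# Sketch: check whether the CPU advertises the instructions the demo leaned on.
# Linux-only; reads the first "flags" line from /proc/cpuinfo.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("sha_ni", "vaes", "avx512vl"):
    print(f"{feature:9s} {'yes' if feature in flags else 'no'}")
# Whether 7-Zip actually takes an accelerated path still depends on how the
# binary was built, which is exactly the question raised above.
```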
Really, Intel? Adding AI improvements to the Core architecture in 2021? Smartphone vendors were doing it last year... way too late. And 5G will take off at the end of 2019.
I guess I'm not getting why I should be impressed by this. https://www.electronicsweekly.com/news/design/comm... Leti already did it? They say its IP can be used by others, so is this Intel's solution (what they're using, I mean)?
AMD already does chiplets, everyone does SoCs (Intel failed at them), etc. 144mm² is not that small (about the size of a large Apple SoC). The current 7nm A12 is 83mm² with 6.9B transistors, two big cores and four small. AMD already did interposers/chiplets. Memory has been stacking for a while now. Not sure what is supposed to impress me here.
"Very much like a mobile chip" ...Pretty much...Again, why so impressed?
And as the OP noted, you have no idea how big the market is, nor how much they can make on these. I think they have to actually sell some before we can say that (many Intel things have been killed over the years), as their last mobile strategy cost them $16B+ in giveaways and lost them the fab race for a while (maybe forever, because that $16B should have gone DIRECTLY into fabs and 10nm wouldn't be crap now). Once Intel's 7nm hits, it looks like TSMC beats them anyway with 5nm (ok, tie? whatever). My point here is that Intel's 7nm won't be much ahead of TSMC 5nm, if at all, as that is what it will compete with, since tapeouts happen Q2 2019 and chips 12-15 months later. https://www.extremetech.com/computing/278742-tsmc-... Many other articles out there are like this, but that one has a good chart of when and how many wafers, etc. If risk production is really as they say, 5nm chips by Xmas 2020. That puts Intel where with this at 7nm? Unless the answer is Xmas 2020, I'm thinking behind TSMC. It looks like TSMC is aiming for before Xmas and they've been moving at a good clip without many glitches recently, so Intel had better get busy IMHO. TSMC is Q2 2019 risk, or 2H 2019, depending on who you believe. But still, Intel 7nm better hit by Xmas 2020 then, right?
Comments on the last page: uh, Intel should have bought NV under $10, but couldn't take the best from the GPU side because nobody could handle Jen as president :) WOW, look at the value you passed up, Intel; oh, and you'd RULE mobile by now with all those Tegras on Intel's process 5+ years ago (never mind what GPUs would have done on Intel during this time), and you already had the modem solution too (NV bought one and had to kill it; Intel would have taken over everything cpu/gpu/modem/mobile).
With Chromebooks, 2B mobile units not using NV GPUs, etc., nobody would have stopped them at the FTC, since more GPUs, and arguably more computing devices, ship without WINTEL or Intel's GPUs (even with NV in there). Intel GPUs wouldn't have been needed, mobile wouldn't have been lost (14nm Intel NV SoCs would have competed well against everyone else's 20nm, same story before 14/20: Intel 22nm NV SoCs vs. everyone else's 28nm), fab money wouldn't have been blown on mobile, etc. All the problems Intel has now are because they blew $16B on failing instead of BUYING NV for that or a bit more. NV had a value back then of ~$6B or less, 659 mil shares at $10; I bought at 12... ROFL. They should have owned NV anywhere in there and all this crap wouldn't have happened... LOL. We'll see how this "ideas from outside" crap works out now. To be fair, AMD had the same problems to some extent, firing Dirk for not liking mobile/tablet/APU and wanting a KING first, then the cheap crap later. Now they chase the king CPU again (not GPU yet so far)... LOL. Yeah, I own AMD stock but still think management is dumb. Can't price anything right, always trying to be a friend or get share, which means NOTHING if it doesn't come with MARGIN as a poor man. Sure, the rich guy can flood a market and kill enemy sales, but only because he has wads of cash and can wait until he breaks you. A poor company needs NET INCOME for next-gen R&D and to retain people like KELLER etc.
I'm only in AMD stock for the 7nm server stuff, then I'm likely out. Rumor/hype works well in advance of real product at AMD (talking stock price here), so you likely don't have to wait for anything other than "shipping soon" or some leaked benchmarks etc., and the price will probably head to 40+. Just get out before that reality hits, or brave the waves... LOL. I think AMD will make money, it certainly has the server chips to do it, but management just seems to fail at pricing anything to take advantage while they can. Too worried about market share instead of MARGIN for R&D. I'd rather own the 10% that makes most of the money than the 80% that makes crap plus a little midrange crap. Apple thinks the same; see their quarterly reports for ages, etc. Own the rich so you can afford to supply the poor. It doesn't work the other way around, generally speaking, especially as the little guy. You can't bleed as the poor little guy ;)
One more point, in case anyone brings it up: the A12X is 122mm² with 10B transistors; it just adds two more big cores IIRC (maybe a few other small changes). Same point though.
nathanddrews - Wednesday, December 12, 2018 - link
I know the meme about gaming on Intel graphics, but if they implement Adaptive Sync *combined* with some sort of low framerate compensation, it would make gaming on Intel IGP much less hilarious. Can Intel license FreeSync without using AMD GPU inside? I know FreeSync worked on KLG, but that had an AMD GPU.RarG123 - Wednesday, December 12, 2018 - link
Like many of AMD's things, FS's an open standard and royalty free. Anyone can use it.Ryan Smith - Wednesday, December 12, 2018 - link
More specifically, Freesync 1 just AMD's implementation of DisplayPort Adaptive Sync. Intel has to build their own implementation in their display controller and driver stack, but past that all the signaling aspects to the monitor are standardized.Topweasel - Wednesday, December 12, 2018 - link
Ryan, I thought that was reversed, that AMD worked on adding Adaptive Sync into the specs and worked on making sure it's implementation matched what they were doing with Freesync.kpb321 - Wednesday, December 12, 2018 - link
IIRC it's a bit of both. Adaptive Sync was present in the eDP standard for things like laptop monitors or tablets as a power saving feature. AMD brought this to the desktop side of things to use for variable framerates in games and helped the standard bring it over too.edzieba - Wednesday, December 12, 2018 - link
'Adaptive Sync' is effectively the eDP Panel Self Refresh ported over to the full DP spec.drunkenmaster - Wednesday, December 12, 2018 - link
Freesync utilises adaptive sync. Adaptive Sync is the technology on the screen side, a screen must support adaptive sync to be used by Freesync. Freesync is just the AMD side of it. If a adaptive sync capable screen is detected you can turn on freesync in drivers. Adaptive sync was a standard written up and proposed by AMD and given to I forget who it is now, Displayport group direct or to Vesa. They accepted it and implemented it pretty quickly but as with all things standards take a long time for get integrated into the next cycle or two of products.Anyone can use Adaptive sync panels, no one but AMD can use freesync as it's something specific to their hardware and drivers. intel will produce their own specific driver/implementation and just connected to adaptive sync panels in the same way.
porcupineLTD - Wednesday, December 12, 2018 - link
So Intel is going straight to chiplets on interposer, it will be interesting to see if AMD adopts this with Zen 3 or waits until Zen 4. Anyway its nice to see competition doing its job.Alexvrb - Wednesday, December 12, 2018 - link
We don't know yet exactly how much logic Intel is moving to the interposer. It looks awesome for mobile form factors! I think they will face some challenges to bring it to high-TDP desktop solutions, though.ajc9988 - Wednesday, December 12, 2018 - link
http://www.eecg.toronto.edu/~enright/micro14-inter... http://www.eecg.toronto.edu/~enright/Kannan_MICRO4... https://youtu.be/G3kGSbWFig4 https://seal.ece.ucsb.edu/sites/seal.ece.ucsb.edu/... https://www.youtube.com/watch?v=d3RVwLa3EmM&t=...
porcupineLTD - Thursday, December 13, 2018 - link
Can you articulate what you want to say with those links? Or how they might contradict me/answer my question?
ajc9988 - Thursday, December 13, 2018 - link
What I am saying with those links is that AMD has already evaluated the use of active interposers and is the company that is articulating a data protocol for active interposer routing so that it doesn't matter what each chiplet does on its own data routing, so that any chiplet could be attached to an active interposer and work. Further, in the 2017 white paper, it directly shows the reason that AMD hasn't yet adopted the active interposer is due to costs. Passive interposers can be used, but don't offer the benefit of active interposers. But, the cost of an active interposer is the same as doing a monolithic chip once the active interposer is produced on the 32nm or smaller active interposer. As such, adoption did not yet make sense until the costs are reduced.https://spectrum.ieee.org/tech-talk/semiconductors...
The earlier ones show AMD did research into the optimal topology for active interposers in 2014 and 2015. It shows the optimal core count for the benefit, latencies, etc.
Now, if you look at packaging solutions, say from GF, you find that 2.5D and 3D integration is only available on 32nm and 28nm processes, with 14nm coming soon. https://www.globalfoundries.com/sites/default/file...
I bring up GF doing the packaging as, due to the WSA, even if they cost more than competitors, being able to potentially count the wafers used for the active interposers against the WSA would reduce the fines payed under that contract for doing fabrication at TSMC, making it potentially, on net, cheaper.
Now, there is an article today from Anand saying Intel is releasing the B365 chipset on 22nm. ( https://www.anandtech.com/show/13714/intel-adds-b3... ) Intel was forced to move chipsets back to 22nm due to the 10nm process not being ready for mass deployment which caused a shortage of 14nm capacity. This means Intel could not shut down their 22nm fabs to cut that capacity. As such, a fab needs to stay as close to capacity as possible or else it bleeds money (hence why AMD eventually went fabless). So, Intel using 22nm fabs to do the 22nm active interposer is just Intel making lemonade on tech that few others adopted in the industry yet due to the basis of costs. If you go back to the cost study AMD did, the active interposer at 20nm is around the cost of doing a monolithic die, which is what Intel has done to date. So it isn't really costing them more to add in the active interposer at all, while saving expenditures by keeping the 22nm fabs full of jobs, while having awesome yields on the active interposers due to it being a very mature node with low defect rates, etc. If you examine the amount of area AMD estimated is needed for the logic routers, you can see only 1%-10% area on the active interposer is needed to achieve the goal, meaning the chances a critical defect hits the active interposer is very low.
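To put rough numbers on that defect argument, here is a minimal sketch using the standard Poisson yield model (Y = exp(-A * D0)); the interposer area and defect density below are hypothetical values I picked for illustration, not figures from the papers:

import math

def poisson_yield(area_mm2, defects_per_mm2):
    # Classic Poisson yield model: Y = exp(-A * D0)
    return math.exp(-area_mm2 * defects_per_mm2)

interposer_area = 400.0     # mm^2, assumed size of a large active interposer
d0_mature_node = 0.001      # defects/mm^2, assumed for a very mature 22nm-class node
critical_fraction = 0.10    # upper end of the 1-10% active-logic area figure

full_yield = poisson_yield(interposer_area, d0_mature_node)
critical_yield = poisson_yield(interposer_area * critical_fraction, d0_mature_node)
print("Yield if the whole area were critical: {:.1%}".format(full_yield))        # ~67%
print("Yield if only 10% of the area is critical: {:.1%}".format(critical_yield))  # ~96%

Even with pessimistic assumptions, shrinking the critical area to a tenth of the interposer pushes yield back into the high nineties, which is the point being made above.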
But, as to it making AMD do it, that is the wrong thinking. AMD already plans to adopt it, just not until the costs of doing so are lower. They will check the IC Knowledge lists to see when costs make sense.
But, if you look at all that data I provided, AMD has all the solutions to the problems encountered with using an active interposer. All they are waiting for is it to be cost effective. Intel isn't leading anything here, other than doing it because they need to justify and subsidize having to keep certain 22nm fabs open because of their chipsets due to the delays in 10nm. Doesn't take a genius to figure it out, just takes someone paying attention to the tea leaves.
ajc9988 - Thursday, December 13, 2018 - link
Also, I forgot to mention that the 2.5D integration of HBM PHY onto a stitched interposer was accomplished last year. This suggests that HBM could be added to an active interposer when AMD eventually does adopt one. Meanwhile, with only 1-10% of the active interposer being used according to their papers, it leaves room in the future for elements of the I/O chip to be moved to the active interposer as an additional way things could easily develop. The question is what benefit there would be to producing which parts on the 32nm or 28nm nodes over having their disintegrated I/O die on 14nm. But I would bet AMD has an idea of what would be better where and will consider it when eventually adopting, considering the detailed paper on routing topology of active interposers in that group of links.
porcupineLTD - Thursday, December 13, 2018 - link
Thank you for the detailed response, it's very informative, however it's beside my comment: IF Intel ships its chiplet-on-interposer design in 2019 they will be first (regardless of the reasons or economics behind it). As for AMD not doing it now, that's again beside my comment; I wondered if they will do this for Zen 3 or 4, not Zen 2. Zen 3 will launch in 2020 when (as far as I know) the WSA expires, so what GF does or doesn't do is not that relevant (I would be curious to know about TSMC's or Samsung's packaging/interposer offerings). The result (if they implement it in Zen 3/4) will come down to cost (as you point out), socket compatibility (can it be done on the current server socket?) and planning (the Zen 3 design is almost done right now, so even if the interposer cost is down AMD can't implement it if they didn't predict this one year ago).
porcupineLTD - Thursday, December 13, 2018 - link
Also, Anandtech, I am not versed in web design, but is it really that hard to have a comment section that doesn't utterly suck? Is it really that hard to introduce an edit function, or at least a delete one? (An upvote/downvote system would be asking for too much.)
ajc9988 - Thursday, December 13, 2018 - link
Forgive me, many think that Intel is showing through using it that they are significantly ahead in the field of active interposers and 2.5D and 3D chiplet integration, meaning that it would take years for other chip designers to catch up, which isn't the case. So I do apologize for that assumption in regards to you.What those do show is AMD does have plans to do so in the future, it is just a matter of timing. On the socket part, there is a chance they are introducing a new socket in order to support PCIe 4.0 on the upcoming Zen 2 chips, which comes from analyzing their wording from the Next Horizon event on Nov. 6th. Meanwhile, we know that PCIe 5.0 will be finalized potentially in the first half of next year and AMD did mention DDR5 potentially being available for Epyc 3 chips based on Zen 3, but that mainstream chips will not support DDR5 in 2020 (leaves open that TR HEDT platforms may or may not get DDR5 support at that time). Intel has not provided information on when PCIe 4.0, 5.0, or DDR5 will be supported. As such, though, AMD may have backwards compatibility on the CPUs for socket compatibility, but may require a new socket for the new boards containing the new features, which I think is understandable to many in the server and workstation spheres. It is also the reason I may wait for 2020 to upgrade my 1950X rather than next year (if you are going to buy a new board, and there is a chance that board won't contain the new feature sets that release that year, waiting one more year is fine IF your workloads won't suffer for the wait).
But, I read somewhere Intel hinted at 1GB of RAM being integrated with the active interposer processors, acting as an L4 cache. Although not novel (those articles I gave for AMD whitepapers from 2014 and 2015 specifically dealt with on package integration of memory and latencies involved, suggesting we could see some type of 3d memory solution integrated when AMD does incorporate an active interposer), seeing the latencies involved with Crystalwell, which was the eDRAM on Broadwell, suggests that Intel will get a significant uplift in certain workloads, as well as keeping the chip primed and having to go off chip less often for memory calls, which is fantastic. Intel also kept the power delivery part under wraps, which is something that does excite me, but we were given no information about it (possibly bringing FIVR back, which was rumored for Ice and Tiger lake anyways).
Also, on compatibility, part of the reason I gave the article discussing data protocols for active interposers is that the routing is chiplet agnostic, meaning you can integrate parts that have their own internal routing and not effect it. Then just comes down to appropriate socket wiring, which I mentioned they may need new sockets for these features, while just maintaining drop in compatibility for consumer side.
Here is some information on other packaging types out there in the market. I am including them, but as this is just being researched, I am not attesting veracity or relevance, fully. I will have glanced through them, but you understand.
http://s3.amazonaws.com/sdieee/1817-SanDiegoCPMTDL...
"DIGITIMES reports that the new TSMC plant in Chunan will be dedicated to offering the foundry’s in-house developed advanced packaging technologies: CoWoS (chip-on-wafer-on-substrate) and integrated fan-out (InFO) wafer-level packaging, and its newly-unveiled system-on-integrated-chips (SoIC) and wafer-on-wafer (WoW) packaging services."
https://criticalmaterials.org/tsmc-to-set-up-new-f...
https://fudzilla.com/news/pc-hardware/47265-tsmc-t...
https://electroiq.com/2018/10/synopsys-design-plat...
Older article for foundational work from 2014:
https://semiengineering.com/time-to-revisit-2-5d-a...
2016 follow up: https://semiengineering.com/2-5d-becomes-real/
Packaging Market from February 2018:
http://worldherald24.com/2018/02/19/global-3d-ic-a...
Paper and slide show discussing the implementation and challenges of 3D packaging from 2016:
https://smtnet.com/library/files/upload/25d-3d-sem...
Sorry for the data dump at the end. I do apologize about that.
tommybobberson - Saturday, December 15, 2018 - link
Just clarifying: do you mean that to use such methods, AMD would need to drive up production costs, which reduces their profit/value, but since Intel already has a high production cost, such methods would not affect them?
ajc9988 - Wednesday, December 19, 2018 - link
tommybobberson - in a manner of speaking, yes. How much it costs to produce affects margins and pricing. One reason AMD could price the high core count chips in such a manner, aside from effective yields for products, is the decreased cost to produce high core count products. If AMD added the active interposer now, it would raise production costs to be the same as a monolithic die, which would then require either an increase in the market price of the products or a decrease in margins.
Intel, on the other hand, never captured the extra margin of going to chiplets before introducing the stacked chips. That means the active interposer costs them less, as the price to do so matches the cost of a monolithic die. As such, Intel can maintain their current pricing while introducing the stacked die on an active interposer. Further, by filling up 22nm fab time (because 10nm caused a shortage of 14nm fab time, which required Intel to move certain lines back to 22nm, likely forcing some plants planned to be closed or retrofitted to remain on 22nm), Intel will be able to help offset the cost of the 10nm delay, thereby mitigating some of the effects of not having 10nm up and running by now.
Supercell99 - Friday, December 14, 2018 - link
With all the hackers, PC gaming is dying.
sgeocla - Wednesday, December 12, 2018 - link
First they mock your chiplets by saying you glue your dies together, then they fail to compete with you, and then years after they copy your chiplets and call it revolutionary.
This is why Intel can no longer have public presentations of their own failures and has to have privately controlled presentations.
CajunArson - Wednesday, December 12, 2018 - link
There's nothing whatsoever revolutionary about "chiplets". A 10 year old Core 2 Quad used exactly the same technology that AMD calls "chiplets" in 2019, and AMD fantards like you even insulted the Core 2 Quad for doing it.
Maybe you should actually read the article about what an active interposer can do vs. wiring standard hunks of silicon to a PCB in literally the same way it was done in the 1970s, before you run around acting like AMD is the only company to ever innovate anything.
sgeocla - Wednesday, December 12, 2018 - link
I've been reading articles about Intel 10nm promises for years now. And then we got failed laptop chips and low power PC boxes to appease 'mass production' status and not get sued by investors for false claims.
Forgive me if I withhold my drooling until Intel actually delivers something that does not require industrial chillers. BTW, where are those 28 core HEDT chips anyway?
Targon - Wednesday, December 12, 2018 - link
There is always a point where the WHEN something is used sets a new direction. Multi-CPU in a workstation/server moving to a single processor with multiple cores was a big shift. Moving from two cores linked together when cache coherency was a big problem to a single dual-core without an interposer was a better way to go. It all comes down to if there is a performance boost or degradation as a result of the technology/implementation.With that said, a single CPU with 64 cores is fairly significant, and keeping the performance from being horrible with that many cores is the reason AMD has been praised. Price/performance for the server market and such.
For a long time, Intel was seen as the king when it came to clock speeds and performance, but Intel hasn't had a significant boost to IPC in over three years. Intel has also been promising 10nm for three years, and still no sign of it, with the promise of holidays 2019.
So, Intel still has nothing, they have vague promises of ways they will improve performance, but it remains to be seen if the performance will actually be better if 10nm slips again. On the flip side, AMD clearly has significant performance boosts from Ryzen 3rd generation in 2019(March/April being when many expect it). 7nm from AMD isn't a, "will they?" question, it isn't even a "when?", with CES in one month and with it, the answers. IPC improvements due to design improvements not related to chiplets at all would be good, as well as higher clock speeds. So, there is a potential for 30+ percent higher performance in one generation.
Yes, I don't expect AMD to deliver huge performance jumps again for years, but we may see things such as Gen-Z support, going beyond two memory channels for the mainstream Ryzen chips when the next socket comes out in 2020/2021, and other things that may boost system/platform performance while AMD figures out how to get more CPU performance.
Intel is still trying to do things the same way, just faster. Faster CPU, faster links to individual devices, fabric on a system level will be Intel trying to reinvent what AMD has been working toward.
I will also note again that some things are not always about being new, but are more about presentation and implementation. Palm really popularized the idea of apps that users could install on a small portable device(PDA), but Apple popularized it with the iPhone. In some cases, the implementation really is good, and will get the respect of the industry, in other cases, you see that something is clearly a case of following the lead of another player.
So, in the PC industry, is Intel leading the way with innovations, or is AMD in the driver's seat?
iwod - Thursday, December 13, 2018 - link
No one insulted Core 2 Quad for doing it, and neither did AMD. But Intel did insult AMD and went full force bad-mouthing AMD.
Spunjji - Thursday, December 13, 2018 - link
Using a term like "fantard" straight-up devalues your argument, but the blatantly false statement about the C2Q using "exactly the same technology" seals the deal.
Chiplets refers to the CPU being divided into multiple sections (cores and un-core) on a single package using dedicated interconnects. It's not at all the same technology as having two discrete CPUs joined by the FSB on a single package. Both are novel approaches to particular problems, although the C2Q (and the Pentium D before it) were criticized as inefficient for using the FSB for inter-core communication. We don't know how "chiplets" will pan out yet, so the jury's out.
Bash the fans for talking nonsense all you want, but maybe don't sink to their level.
edzieba - Wednesday, December 12, 2018 - link
If you think through-package interconnects compare to through-silicon interconnects, then I have some HBM on DIMMs to sell you.
Spunjji - Thursday, December 13, 2018 - link
Noice. :D
III-V - Wednesday, December 12, 2018 - link
I love how everyone thinks AMD is the pioneer with chiplets. They're not. That would be Marvell.
And Intel themselves have been hinting that it's a good way to go, looking at their EMIB solution.
But AMD fan boys are a special breed of stupid...
sgeocla - Wednesday, December 12, 2018 - link
The electric car was pioneered more than a hundred years ago.
It's one thing to pioneer something and a whole different thing to actually develop it into something that is affordable to millions and drags the whole industry forward.
If you think pioneering is all there is to it, I have hundreds of graphene battery designs you should invest your narrow-minded-driven life savings into.
evernessince - Wednesday, December 12, 2018 - link
You have some issues buddy. How about not being toxic next time.
prisonerX - Wednesday, December 12, 2018 - link
And you're a special brand of pathetic. Congrats.Don't worry though, one day you'll hit puberty and things will improve.
LogitechFan - Wednesday, December 12, 2018 - link
Oh yes, amdturds are THE breed of stupid to behold...
drunkenmaster - Wednesday, December 12, 2018 - link
Would that be Marvell's chip from 3 years ago that suddenly people are talking about... which is just a chip on an interposer with an HBM chip? HBM which AMD co-developed, had prototype APU packages using in 2011, and launched Fury X with HBM on an interposer 3 years ago.
Right, but AMD didn't pioneer it, because Marvell made a dramatically less complex switch using the same concept at the same time AMD did. But you know, AMD co-developed the memory, they worked with the packaging plants for years to be able to mass produce interposer-packaged products, and Fury X was the reason HBM went into full production in the first place... but sure, AMD definitely didn't pioneer this latest move.
ajc9988 - Wednesday, December 12, 2018 - link
http://www.eecg.toronto.edu/~enright/micro14-inter... http://www.eecg.toronto.edu/~enright/Kannan_MICRO4... https://youtu.be/G3kGSbWFig4 https://seal.ece.ucsb.edu/sites/seal.ece.ucsb.edu/... https://www.youtube.com/watch?v=d3RVwLa3EmM&t=...
qap - Wednesday, December 12, 2018 - link
Well ... yeah. Intel had multi-chip CPUs almost 15 years ago (look for Pentium D). And it was mocked by AMD and its fans.
I thought that the mocking Intel did last month was just to remind AMD of what they said then, and I found it hilarious. But maybe not. People don't change, and it's completely plausible that it was said by someone who doesn't know the history.
Topweasel - Wednesday, December 12, 2018 - link
Never mocked by AMD. They just made sure to point out that their 2 cores with the X2 and quad with the Phenom were "real" multicore chips. It mattered more back then because Intel's boards had FSBs instead of direct connections, and the separate chips had to talk through the FSB and chipset northbridge to talk to each other. Also, AMD hadn't been rambling on about their glue tech for years (with no product yet on the market using it (Kaby-G isn't actual EMIB)) when they made their statements.
AMD promoted their products and said they were better because they were real. There is a negative connotation with that. But that is different than calling out your competitor for using "glue".
29a - Wednesday, December 12, 2018 - link
"Well ... yeah. Intel had multi-chip CPUs almost 15 years ago (look for Pentium D). And it was mocked by AMD and its fans."I think pretty much everyone mocked anything P4 related, you have to admit it was pretty bad.
FreckledTrout - Wednesday, December 12, 2018 - link
What's P4? To flush it down the toilet.
tshoobs - Wednesday, December 12, 2018 - link
Never heard that one before! So true, the P4 was such an embarrassment.
JlHADJOE - Wednesday, December 12, 2018 - link
lol this takes me right back to the days of the P4, and the awesome "x is good... for me to POOP on!" meme
f1nalpr1m3 - Wednesday, December 12, 2018 - link
Yeah, they totally put it together in three weeks and everything.Get a clue.
pkgtech - Thursday, December 20, 2018 - link
For years Sohail Ahmed was the roadblock to Intel doing any creative package technology by blocking silicon support. He finally was shown for what he was after 2+ year process technology delays and re-invented this. Good riddance Sohail
Adonisds - Wednesday, December 12, 2018 - link
How do the Intel process delays influence their microarchitecture plans? 10 nm was already supposed to be here and with it its new microarchitecture, Ice Lake. Does the Ice Lake design continue to get improved as the delays keep happening, or was it finalized years ago? Why? What about the microarchitectures succeeding Ice Lake?
III-V - Wednesday, December 12, 2018 - link
This is Ice Lake. Might be a 14nm port (unclear at this point why the name change), but it's at the very least a close relative.
III-V - Wednesday, December 12, 2018 - link
Actually, given the name on the heat sink, it is probably just straight-up Ice Lake.
HStewart - Thursday, December 13, 2018 - link
I believe the roadmap states Ice Lake will be on 10nm. More importantly it will be on Sunny Cove, which is a significant update to the architecture.
AdhesiveTeflon - Wednesday, December 12, 2018 - link
Intel and their naming scheme....they should call one "blue slushie lake"
prisonerX - Wednesday, December 12, 2018 - link
Plans? Intel are in full scale panic mode right now. I'm sure they have new plans every week.
jjj - Wednesday, December 12, 2018 - link
LOL you went in full fanboy mode with the 144mm package being small.
The package is that size for PoP, the die is much much smaller, they can fit way more 10nm cores than that in such a large area, even without a base die.
Anyway, there are no relevant details on Foveros and that's problematic. The first question is cost, then you would want to know details about pitch and so on. Intel is by no means the first to announce such a solution so the details are what matters.
This is how you get to a 3D monolithic die in some years so any foundry that wants to stay in the most advanced node game, needs to push the packaging roadmap.
Ian Cutress - Wednesday, December 12, 2018 - link
I specifically said the package was small, and the die was smaller than the package.
jjj - Wednesday, December 12, 2018 - link
And you acted like someone clueless that has put all his savings in Intel's stock based on brag, "Large revenue stream" LOL, you don't know anything about the product, the target market, or the packaging solution, but you get all wet.
Anyway, took a look at the slides and they do actually provide some relevant details that you seem to not notice. They say face-to-face, bump pitch at 36um and bump density at 828/mm2.
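As a quick sanity check on those numbers (my own back-of-the-envelope math, not from the slides):

import math

pitch_mm = 0.036  # 36 um bump pitch quoted above

# One bump per cell on a square grid vs. hexagonal close packing
square_density = 1 / pitch_mm**2                                # ~772 bumps/mm^2
hex_density = 1 / (pitch_mm**2 * math.sin(math.radians(60)))    # ~891 bumps/mm^2

print("Square grid: {:.0f} bumps/mm^2".format(square_density))
print("Hex packing: {:.0f} bumps/mm^2".format(hex_density))

The quoted ~828/mm2 sits between the two, which is what you'd expect from a real layout that isn't a perfectly uniform array.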
Spunjji - Thursday, December 13, 2018 - link
Your comments are unnecessarily aggressive, dripping with unwarranted confidence and really unpleasant to read. Please spend more time offline.
jjj - Wednesday, December 12, 2018 - link
For the hell of it, I'll add this IMEC slide (different versions out there but this one was the first I could find): http://www.techdesignforums.com/practice/files/201...
iwod - Thursday, December 13, 2018 - link
Ignore the trolls.
Raqia - Wednesday, December 12, 2018 - link
That's a nice fan on their fanless SoC design. It seems like Intel is really playing catch up with their ultra mobile designs (with a power envelope that's likely still above phones) fabbed on a late 10nm design and only now doing things like heterogeneous cores and PoP memory that the AX and Snapdragons have been doing for a few years now.I wonder what portion of that block diagram's compute reside on the P1222 (which looks bigger die space wise on their diagram than the P1274 but may pack fewer transistors.) They mentioned IO but it wouldn't make too much performance sense for caches and coherency to segregate the Big/Little cores onto separate dies, so it likely contains the ISPs and memory controllers. Where the GPU is located is another interesting question and no LTE modem can be seen on the block diagram. This is also likely to have higher packaging cost and worse integration than the AX, Kirins, PXX's or Snapdragons.
A5 - Wednesday, December 12, 2018 - link
Dev boards have fans because they don't have a chassis to sink heat into. If it's like every other dev board I've used, it is probably an 8k RPM screamer, too.
The_Assimilator - Wednesday, December 12, 2018 - link
Seems like you failed to read page 4: "These were development systems with these funky looking heatsinks and loud fans to ensure there was no thermal throttling."
Raqia - Wednesday, December 12, 2018 - link
My comment was in reference to their new Atom lineup, which is supposed to be fanless, not Sunny Cove, which operates in a much higher power envelope. Surely they could use a larger passive cooling unit to demonstrate what's supposed to be a key feature of that product; my other point is that Intel's aiming for something later next year that seems much less advanced than what Apple, Huawei, Mediatek, and Qualcomm already have in production today.
The_Assimilator - Thursday, December 13, 2018 - link
And, again, the Foveros setup was a demo board in the open air, not a production chip inside a device designed around dissipating the heat it outputs.
Spunjji - Thursday, December 13, 2018 - link
If that's what you took from that part of the presentation then I feel you missed the point. What Intel are doing in terms of chip layout is something none of the companies you mentioned are able to do yet, which is the fundamental point here - whether it results in a useful product is another matter entirely, but it's still fascinating and not same-same.
To back up the other people who responded to you, again, the fan is irrelevant. It's a dev board.
Raqia - Thursday, December 13, 2018 - link
Your point is taken and Keller did say it was in its infancy, but I am interested in whether what we're seeing here will be a competitive product or will remain an interesting science experiment. There are theoretical benefits of stacking high performance dies on low leakage ones like this but also substantial challenges and deficiencies which the current iteration doesn't show that it has overcome. What we might see in benefit in terms of better overall area, lower package level fab rejection rates, and better net power characteristic could be offset by a worse concentration of heat and hence more throttling when both elements are running or more expensive packaging. Perhaps in the end, a monolithic die is a better compromise despite losing out on some metrics for mobile.nico_mach - Wednesday, December 12, 2018 - link
So the GPU is going to be called ... ten to the e-th power? Is that right?
I reject all these Xes used in unpredictable ways. The iPhones are pronounced exar and excess. This is ecksee, and I still use oh ess ecks on my emm bee eh at home.
Jon Tseng - Wednesday, December 12, 2018 - link
> Intel actually says that the reason why this product came about is because a customer asked for a product of about this performance but with a 2 mW standby power state.
Huh wonder who the customer for that Core/Atom hybrid is. Seems a bit overpowered for a tablet. A bit underpowered for a MacBook (or for a car). Chromebooks maybe but most are too low volume to demand a custom part (maybe the education market is taking off?). PC OEMs don't normally take such custom parts for their laptops. But the graphics loadout implies some kind of PC-type application?
Any ideas??
HStewart - Wednesday, December 12, 2018 - link
From the diagram, it appears that the hybrid CPU has a single Core CPU with 4 small (Atom) CPUs - such technology is done with Samsung processors - this would mean it is still lower power but still has primary single-thread core speed.
Most interesting would be how the smaller cores are used by the scheduling system. Most likely it means an enhancement in the OS for proper usage.
A5 - Wednesday, December 12, 2018 - link
There aren't a ton of companies big enough to make Intel create a new product line just for them.The whole list is probably Apple/HP/Dell. Maybe Microsoft.
The_Assimilator - Wednesday, December 12, 2018 - link
Microsoft Surface, obviously. It's become a very profitable line for MS but the current models are either too battery-hungry (Core CPUs) or too slow (Atom CPUs). Foveros will give the best of both these worlds while also being x86... priced right, a Foveros-based Surface will essentially end any argument for iPads in a business environment, especially considering most software remains firmly single-threaded. But it remains to be seen whether (a) Intel can get the power down even further (7W is still double most smartphones) and (b) whether their big.LITTLE implementation is good enough.
Raqia - Wednesday, December 12, 2018 - link
Windows on ARM will do just fine now that Visual Studio emits ARM native code. Once Chrome gets ported (and that will be soon https://www.neowin.net/news/both-chromium-and-fire... the platform should address 95% of typical daily use cases and provide substantial compatibility with legacy software / file formats. This is better value than iPads and upcoming dedicated SoCs like the 8cx should offer better performance and battery / heat characteristics than what Intel has planned for next year in the same power envelope.The_Assimilator - Thursday, December 13, 2018 - link
I think you missed the part where Windows on ARM is horribly slow and therefore shitty. As a result, Microsoft has no plans to port anything useful (e.g. Office) to ARM, which means Windows on ARM is stuck being the lowest of the low-end. And that's not a space that Surface is intended to play in; Surface is an iPad competitor, and an iPad competitor can't be slow and shitty. Business devices can't be slow and shitty, and they absolutely need to be able to run Office.
I expect that either Windows on ARM will be allowed to wither and die once Foveros ships, or it will languish in a dead zone whereby only the cheapest of the cheap devices by no-name-brand OEMs (think $100 Lenovo tablets) use ARM chips and hence need it.
So unless Qualcomm's 8cx is a game-changer in terms of performance, Foveros should be the end of ARM on desktop, and thank fucking God for that.
Spunjji - Thursday, December 13, 2018 - link
Microsoft already have an Office code base on ARM, so I'm not sure what you're talking about there.
What would worry me about an Intel big.LITTLE-style design is that if Windows doesn't assign your performance-critical application to the correct (big) core, performance will mostly suck just as hard as if all your cores were Atom (rough illustration of the idea below).
As such, I'd be cautious on calling a winner just yet.
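To make the worry concrete, here's a minimal sketch of the kind of manual pinning you'd hope the scheduler makes unnecessary. The core numbering is hypothetical (assume logical CPUs 0-3 are the small Atom cores and CPU 4 is the big core), and it uses the third-party psutil package rather than anything Intel- or Windows-specific:

import psutil

BIG_CORES = [4]  # hypothetical: the single big core in a 1-big + 4-small hybrid

def pin_to_big_core(pid):
    # Restrict a performance-critical process to the big core only.
    p = psutil.Process(pid)
    print("Affinity before:", p.cpu_affinity())
    p.cpu_affinity(BIG_CORES)   # setter form works on Windows and Linux
    print("Affinity after: ", p.cpu_affinity())

if __name__ == "__main__":
    pin_to_big_core(psutil.Process().pid)  # pin the current process as a demo

In practice you'd want the OS scheduler (or driver hints) to make this decision per-thread based on demand rather than hard pinning - which is exactly the part that's unproven so far.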
gamerk2 - Thursday, December 13, 2018 - link
Agreed with this; Microsoft has been let down by Intel not having a good mobile platform. If it were up to them, they wouldn't bother with ARM, but they have to due to battery/power/heat requirements.
zodiacfml - Thursday, December 13, 2018 - link
YES
Raqia - Thursday, December 13, 2018 - link
For ultra-mobile, not only are battery/power/heat issues a concern, but supply is one as well due to Intel being locked down to their own manufacturing division. On top of that, they have a lock on x86 by not licensing to any competitors but AMD, who despite competitive stretches inevitably stumbles (either due to themselves or Intel's non-engineering financial efforts) and leaves the industry with dry spells of performance improvements. Intel's gross margins on their chips remain >60% as a result, whereas ARM SoC margins even after licensing are closer to 20-30%.
Raqia - Thursday, December 13, 2018 - link
Keller declared that the technology is in its infancy, and feature wise the 2019 version of the Atom simply won't be competitive with leading ARM SoCs like the 8cx. The slowness you refer to only occurs when running native 32 bit x86 code on the WOW emulation layer, but the value of this feature is mostly in the compatibility being there at all. If performance and compatibility of legacy code matters to you then certainly Windows on Arm isn't suitable. However, it will matter even less now with the new native compilation tools and ports of important sub platforms like Chromium.29a - Thursday, December 13, 2018 - link
"Windows on ARM is horribly slow and therefore shitty."Sounds a lot like Windows on Atom.
MonkeyPaw - Saturday, December 15, 2018 - link
I'm betting Apple wanted one for the MacBook Air, or maybe MS for the Surface Go. It would be the right amount of performance for both devices, and both companies would have the clout to get it done. I'd lean toward Apple because the GPU is pretty big.
Kevin G - Wednesday, December 12, 2018 - link
Typo:"a physical address space up to 52 bits. This means, according to Intel, that the server processors could theoretically support 4 TB of memory per socket."
That should be petabytes instead of terabytes. The limit is for an entire system, not per socket as additional sockets will not grant any additional capacity.
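For reference, the arithmetic behind that correction (simple math, not from the article):

bytes_addressable = 2 ** 52          # 52-bit physical address space
pib = bytes_addressable / 2 ** 50    # 1 PiB = 2^50 bytes
tib = bytes_addressable / 2 ** 40    # 1 TiB = 2^40 bytes
print("2^52 bytes = {:.0f} PiB ({:,.0f} TiB)".format(pib, tib))  # 4 PiB, i.e. 4,096 TiB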
gamerk2 - Thursday, December 13, 2018 - link
NUMA systems could potentially be per-socket rather than OS wide.
HStewart - Wednesday, December 12, 2018 - link
It sounds like Intel has been working on increasing performance in two ways:
1. 7nm change for the future - because of limitations found with 10nm
2. 10nm enhancements to correct the performance issues found with Cannon Lake
But most importantly, architecture improvements like faster single thread execution, new instructions and multi-core improvements will in the long term significantly improve performance.
ishould - Wednesday, December 12, 2018 - link
Forgive me if I take 2 metric tons of salt with any roadmaps Intel provides these days. They haven't exactly had the most accurate timelines as of late (past four years).
HStewart - Wednesday, December 12, 2018 - link
It appears they realize that and are coming out with documents to indicate they have made corrections - this is better than not knowing what they are planning - or, as some AMD fans would like to believe, that they lost the battle.
Spunjji - Thursday, December 13, 2018 - link
Is it better? Their last roadmaps were not worth the PowerPoint slides they showed up in, not to mention the whole "tick-tock-optimise-optimise-delay" fiasco.
HStewart - Thursday, December 13, 2018 - link
From the look of things in this excellent article, it looks like for 2019 Intel is combining both tick and tock together with significant architecture improvements along with process improvements.
johannesburgel - Thursday, December 13, 2018 - link
Compared to the latest Xeon roadmaps I have seen in NDA meetings, these desktop roadmaps still seem quite ambitious. They don't expect to ship a "lower core count" 10nm Xeon before mid-2020.
HStewart - Thursday, December 13, 2018 - link
Just because Intel did not mention it does not mean it will not happen.
Also remember that Intel is decoupling the process from the actual architecture. In the past, as I recall, the Xeon technologies were forerunners of base core technology. Hyperthreading is one example, and multiple core support another.
Vesperan - Wednesday, December 12, 2018 - link
It's 6am for me, and with the mugshots of Jim Keller and Raja Koduri at the end you could have labelled this the AMD architecture day and I would have believed you. It will be an interesting several years as those two put their stamp on Intel CPUs/GPUs.
The_Assimilator - Wednesday, December 12, 2018 - link
So Intel is going to take another poke at the smartphone market, it seems. Well, let's hope Foveros fares better than the last half-dozen attempts.
Rudde - Wednesday, December 12, 2018 - link
7W is too much for a smartphone's power budget. Smartphones operate at a sub-1W power budget.
johannesburgel - Wednesday, December 12, 2018 - link
The just announced Qualcomm Snapdragon 855 has a peak TDP of 5 watts. Most smartphone manufacturers limit the whole SoC to 4 watts. The average smartphone battery now has >10 Wh, so even at full load the device would still run between 1.5 (display on) and 3 (display off) hours. Which it has to in the hands of those gamer kids.
YoloPascual - Wednesday, December 12, 2018 - link
Had the og zenfone with the intel soc. It drains battery as a gas guzzler suv. Never buying a smartphone with intel inside ever again.Mr Perfect - Wednesday, December 12, 2018 - link
It's exciting to see Intel use FreeSync in their graphics. They could have easily gone with some proprietary solution, then we'd have three competing monitor types. Hopefully having both AMD and Intel on FreeSync will prompt Nvidia to at least support it alongside G-Sync.
Spunjji - Thursday, December 13, 2018 - link
They committed to Adaptive Sync back with Skylake, but it's taken this long to see it because they haven't released a new GPU design since then. It would have been a *very* weird move to suddenly release their own tech.
gamerk2 - Thursday, December 13, 2018 - link
I think it's more likely NVIDIA just waits for HDMI 2.1, which supports VRR as part of the specification.
I also suspect HDMI 2.1 will eventually kill off DisplayPort entirely; now that HDMI offers more bandwidth, and given DisplayPort is a non-factor in the consumer (TV) market, there really isn't a compelling reason for it to continue to exist alongside HDMI. We *really* don't need competing digital video connector standards, and HDMI isn't going anywhere.
edzieba - Thursday, December 13, 2018 - link
HDMI is fantastic for AV, but has NO PLACE WHATSOEVER for desktop monitors. It causes a multitude of problems due to abusing a standard intended for very specific combinations of resolutions and refresh rates (and completely different colour range and colour space standards), and offers zero benefits. Get it the hell off the back of my GPU, where it wastes space that could be occupied by a far more useful DP++ connector.
Icehawk - Thursday, December 13, 2018 - link
Setting all else aside - DP is "better" because the plugs lock, IMO. HDMI and mini-DP both have no retention system, and that makes it something I do my best to avoid both personally and professionally; love the "my monitor doesn't work" calls when it's just that you moved your dock and it wiggled the mini-DP connector.
jcc5169 - Wednesday, December 12, 2018 - link
Intel will be at a perpetual disadvantage because by the time they bring out a 7nm product, AMD will have been delivering for 2 whole years.
shabby - Wednesday, December 12, 2018 - link
You believe TSMC's 7nm is equal to Intel's 7nm?
silverblue - Wednesday, December 12, 2018 - link
7nm != 7nm in this case; in fact, Intel's 10nm process looks to be just as dense as TSMC's 7nm. I think the question is more about how quickly TSMC/GF/Samsung can offer a 5nm process, because I wouldn't expect a manufacturing lead anytime soon (assuming 10nm processors come out on time).
YoloPascual - Wednesday, December 12, 2018 - link
10nm iNTeL iS bEttER tHAn 7nm TSMC???
ajc9988 - Wednesday, December 12, 2018 - link
The nodes are marketing jargon. Intel's 10nm=TSMC 7nm for intents and purposes. Intel's 7nm=TSMC5n/3nm, approximately. TSMC is doing volume 5nm EUV next year, IIRC, for Apple during H2, while working on 7nm EUV for AMD (or something like that) with 5nm being offered in 2020 products alongside 7nm EUV. Intel's current info shows 7nm for 2021 with EUV, but that is about the time that TSMC is going to get 3nm, alongside Samsung which is keeping up on process roughly alongside TSMC. Intel will never again have a lead like they had. They bet on EUV and partners couldn't deliver, then they just kept doing Skylake refreshes instead of porting designs back to 14nm like the one engineer said he told them to do and Intel didn't listen.I see nothing ground breaking from Intel unless they can solve the Cobalt issues, as due to the resistances at the size of the connections at the smaller nodes, Cobalt is a necessity. TSMC is waiting to deal with Cobalt, same with Samsung, while Intel uses that and Ruthenium. Meanwhile, Intel waited so long on EUV to be ready, they gave up waiting and instead are waiting for that to mature while TSMC and Samsung are pushing ahead with it, even with the known mask issues and pellicles not being ready. The race is fierce, but unless someone falters or TSMC and Samsung can't figure out Cobalt or other III-V materials when Intel cracks the code, no one will have a clear lead by years moving forward. And use of an active interposer doesn't guarantee a clear lead, as others have the tech (including AMD) but have chosen not to use it on cost basis to date. Intel had to push chipsets back onto 22nm plants that were going to be shut down. Now that they cannot be shut down, keeping them full to justify the expense is key, and 22nm active interposers on processes that have been around the better part of the last decade (high yield, low costs due to maturity) is a good way to achieve that goal. In fact, producing at 32nm and below, in AMD's cost analysis, shows that the price is the same as doing a monolithic die. That means, since Intel never got a taste of chiplets giving better margins with an MCM, Intel won't feel a hit by going straight for the active interposer, as the cost is going to be roughly what their monolithic dies cost.
porcupineLTD - Thursday, December 13, 2018 - link
TSMC will start risk production of 5nm in late 2019 at the earliest; the next Apple SoC will be 7nm+ (EUV) and so will Zen 3.
ajc9988 - Thursday, December 13, 2018 - link
https://www.anandtech.com/show/13445/tsmc-first-7n...
Risk production is in Q2 next year, and mass production is listed by Q2 2020 for 5nm.
https://www.extremetech.com/mobile/278800-tsmc-exp...
So, I was a bit off by the estimate for volume being 2020, but you were off on when risk production starts. Meanwhile, 7nm+ is already confirmed for AMD on Zen3, as the benefits of 5nm+ don't outweigh the costs associated moving to the process for AMD. This is why it is thought AMD will skip 5nm and try 3nm when available. But, TSMC has not said when 3nm will be available, while Samsung is saying 3nm in 2021:
https://semiengineering.com/big-trouble-at-3nm/
https://www.cdrinfo.com/d7/content/samsung-details...
http://www.semimedia.cc/?p=2524 (saying TSMC 3nm in 2022/23)
I cannot find the article speculating Apple will be the first customer on 5nm EUV and when ATM.
HStewart - Thursday, December 13, 2018 - link
"Nodes are marketing jargon"Exactly - it reminds me the frequence wars back in P4 days. But if you look closely at Intel's plan - I am no chip designer - even though I did take Micro-code Enginnering classes in College, but Foveros is revolutionary design - I thought EMiB was amazing, but to do that in 3rd diminsion is awesome - maybe one they could even stack cores that way - instead huge chip monsters.
But a nm rating by vendor 1 does not nm rating by vendor 2 - what underneath makes the different - Intel is extremely smart to decouple nm process from actual archexture. If you notice by Intel archiexture Intel has more improvements in core archiexture over next 3 years - this is because they are not limited by process (nm)
ajc9988 - Friday, December 14, 2018 - link
EMIB was not revolutionary and neither is foveros. They are incremental steps and existing competing solutions are available and have been for some time. Not only that, it will only be used on select products with eventual spread to the stack.Go to the second page of comments and see my links there. I think you will find those quite interesting. Not only that, this has been done with HBM for years now. If you look at AMD's research, almost half a decade ago, they were studying optimal topologies for active interposers. They found only 1-10% of the area was needed for the logic routing of an active interposer. Moving a couple I/O items onto the active interposer just is an extension. In fact, you can put those components on a spread out interposer between the above chiplets that sit on the interposer, but would need to plan on the heat dissipation or having so low a heat that it doesn't need sinked.
Considering lack of details of what is on the active interposer or timeline for mainstream, HEDT, and server markets, I will assume those won't see this until 2020, with the first products being mobile in nature.
In fact, Intel this summer gave AIB patents to DARPA to try to control what tech is used for chiplets moving forward, proposing that be used. AMD proposed a routing logic protocol which would be agnostic to routing on the chiplets itself, increasing compatibility moving forward.
Now, if EMIB is so "revolutionary", do the Intel chips with AMD GPUs seem revolutionary? Because that is the only product that comes to mind that uses it. Those chips are Hades Canyon and Crimson Canyon. It isn't that dissimilar to other data fabric uses.
So far, on disintegration of chip components, AMD's Epyc 2 is getting there. It literally uses just cores and the interconnect for the chiplet (for this description, I am including cache with the cores, but when latency is reduced with active interposers, I do expect an L3 or L4 or higher caches or integrated memory on package to be introduced external to the "core" chiplet moving forward). From there, we could see the I/O elements further subdivided, we could see GPU, modems, etc. But all of this has been planned since the 2000s, so I don't see anything new other than the culmination around the same time other alternative solutions are being offered, just that the cost/benefit analysis has not tipped in its favor just yet, but should in the next year or so, which should bring many more designs to the forefront. Here is a presentation slideshow discussing the state of current 2.5D and 3D packaging. After review, I'd like to hear if you still think EMIB and Foveros are "revolutionary." Don't get me wrong, they are an incremental success and should be honored as such. But revolutionary is too strong a word for incremental process. Overall, it changes nothing and is the culmination of a lot of work over a decade by numerous companies and engineers. Even competing solutions can act as inspiration for another company moving forward and Intel's engineers read the whitepapers and published peer reviewed articles on the cutting edge, just like everyone else in the industry.
As to you saying Intel is smart to do it, they haven't done it except in silicon in labs and in papers, unless talking the EMIB with Intel CPU. AMD has a product line, Epyc 2, where the I/O is made at GF on 14nm and the chiplet is made on 7nm TSMC with greater pitch disparity. Intel hasn't really removed the components off the core chip yet into each separate element. ARM is considering something similar, and this is the logical progression for the custom designed chips for RISC V moving forward (may take a little longer, less well funded).
Meanwhile, this doesn't seem to stack high performance cores on high performance cores. The problem of thermals cooking the chip are too great to solve at this moment, which is why low power components are being placed relative to the higher performance (read as higher heat producing) components. Nothing wrong with that, it makes sense.
But, what doesn't make sense is your flowering lavish praise on Intel for something that doesn't seem all that extraordinary in light of the industry as a whole.
johannesburgel - Thursday, December 13, 2018 - link
People keep saying the same thing about Intel's 14nm process, which is allegedly equal to or better than other fabs' 10nm processes. But AMD currently makes products on 14nm and 12nm processes which Intel apparently can't build on its own 14nm process. For example, there is still no 32-core Xeon while AMD will soon ship 64-core EPYCs, and lots of other companies have 32/48/64 core designs on the market. Many Intel CPUs have much higher effective TDPs than their equivalent AMD CPUs.
So pardon me if I am not willing to simply believe in all this "Intel's process is better in the end" talk.
HStewart - Thursday, December 13, 2018 - link
But Intel's single core performance is better than AMD's single core performance. Just because AMD glues 8-core CPUs together does not make them better.
Icehawk - Thursday, December 13, 2018 - link
Node isn't even close to everything.
Rudde - Wednesday, December 12, 2018 - link
Gen 11 graphics in desktops is said to reach double the performance of gen 9.5 desktop graphics. 2W Atoms have half the max frequency of desktop graphics and half or three quarters of the execution units. The 7W custom hybrid processor has the full amount of execution units. I'd guess it has half the frequency of its desktop counterpart to stay within power limits. This would put it at the same performance as 9.5-gen desktop parts, or actually at 30% higher performance.
Think about that. 80% single thread performance compared to current high-end desktop processors (my quick est.) and 130% graphics performance. That's a solid notebook for web browsing, legacy programs and even for light gaming. All that at a power budget of a tablet.
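Roughly, the scaling behind that guess looks like this (every factor here is my own assumption, not an Intel figure):

gen11_vs_gen95_fullspeed = 2.0   # "double the performance of gen 9.5" at desktop clocks
hybrid_frequency_factor = 0.5    # guess: the 7W part runs its GPU at half the clock
hybrid_eu_factor = 1.0           # the hybrid keeps the full EU count

hybrid_vs_gen95 = gen11_vs_gen95_fullspeed * hybrid_frequency_factor * hybrid_eu_factor
print("Estimated 7W hybrid vs gen 9.5 desktop graphics: {:.0%}".format(hybrid_vs_gen95))

That lands at roughly 100% of gen 9.5 desktop graphics; any per-clock gains in gen 11 would push it above that, which is presumably where the ~130% figure comes from.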
If I were to bet, I'd bet on a MS Surface Book.
Spunjji - Thursday, December 13, 2018 - link
Now that would be nice!
Intel999 - Wednesday, December 12, 2018 - link
Keep in mind that 3D XPoint came to market three years past the initial promise from Intel. 10nm will be appearing 4 or 5 years late depending on when volume production materializes.
Chances are that this 3D stacked promise for late 2019 will show up around 2022.
I'm seeing a lot of fellow Intel fanboys show a semblance of confidence that has been absent in recent months, and rightfully so.
Let's all hope Intel can deliver this time on time.
ajc9988 - Wednesday, December 12, 2018 - link
I disagree on worrying about Intel with the active interposer. They use passive interposers for the mesh on HEDT and Xeons and Xeon Phi (since around 2014) for years now. The 22nm active interposer is to fill out fab time due to pushing chipsets back to plants that were going to be shut down due to moving to 10nm, which never came.
Meanwhile, AMD did a 2017 cost analysis saying that below 32nm would cost as much as a monolithic die, so it seems they are waiting due to cost, not on technical capability.
Either way, Intel doesn't hit 7nm until 2021, around the time 3nm may be ready at TSMC, if they go to 3nm within a year of volume 5nm products expected in 2020. That means Intel will never regain the process lead moving forward in any significant way, unless everyone else gets stuck on cobalt integration.
johannesburgel - Wednesday, December 12, 2018 - link
"We have a new method inside the company to decouple IP from the process technology. You must remember that customers buy the product, not a transistor family. It’s the same transformation AMD had to go through to change the design methodology when they were struggling."...doesn't that basically mean they're going fabless, or are at least going to develop a design for multiple processes so they can also use other fabs as well? Not that I'm disagreeing with that! If I was Intel I would have started doing so years ago, when everybody else was starting to do it.
anonomouse - Wednesday, December 12, 2018 - link
I think this just means fewer hand placed/routed custom logic blocks, and more synthesis/APR. If you look at most other dies these days, you see a sea of gates. When you look at intel dies, in the cpu cores you see very tight, regular logic blocks. Every mention of "abstraction" in the Q&A screamed synthesis/APR. This may make it possible for them to port to other foundries if they wanted to, but I doubt they would.An interesting question is whether this has any implications to power/timing/area optimization vs. the current approach, as a lot of their ability to push very high clocks might come from how much more custom logic design goes into their implementation.
kapg - Wednesday, December 12, 2018 - link
I guess in the image of Intel's 'CPU Core Roadmap' for Atom, the name 'Next' Month is a typo from Intel.
Alexvrb - Wednesday, December 12, 2018 - link
"quad-channel memory controller (4x16-bit)"
Well, we know the GPU won't be competitive with upcoming ARM designs, then. Otherwise, very neat mobile class chip design.
Arbie - Wednesday, December 12, 2018 - link
"Golden Cove ... is firmly in that 2021 segment ... we’re likely to see it on 10nm and or 7nm."Likely? If it isn't one of those two it will be big news indeed.
HStewart - Thursday, December 13, 2018 - link
10nm is already stated for Sunny Cove in 2019, so it is likely 7nm - but keep in mind the design is decoupled from the process (nm) - so it could be 10nm or 7nm.
ajc9988 - Wednesday, December 12, 2018 - link
Ian, the active interposer isn't new, and I want to know more exactly what has been moved to the active interposer. AMD's whitepapers on the topic, using routers, etc., on an active interposer, were published in 2014 with a follow-up in 2015. In late 2017, AMD published a white paper dealing with the costs of doing so, where producing an active interposer on smaller than 40nm resulted in costs being the same as producing a monolithic die. AdoredTV did two videos on this topic months ago, as a matter of fact. So, unless they are sticking some of the features that AMD put on the I/O die onto the active interposer, this is doing what AMD plans to do but chose not to on the basis of cost. Check out these papers and videos:
http://www.eecg.toronto.edu/~enright/micro14-inter... http://www.eecg.toronto.edu/~enright/Kannan_MICRO4... https://youtu.be/G3kGSbWFig4 https://seal.ece.ucsb.edu/sites/seal.ece.ucsb.edu/... https://www.youtube.com/watch?v=d3RVwLa3EmM&t=...
Intel seems to now be using ideas from everywhere else in the industry, while also using 22nm fabs that would have been slated for decommission if not for the 10nm fiasco that is their process, which they had to push certain chipsets to due to the 14nm shortage, meaning they need to keep the fab time full to justify them keeping the lights on and a 22nm active interposer fits the bill. The article practically writes itself.
iwod - Thursday, December 13, 2018 - link
No news on opening up TB3? Which they promised to do in 2018.
As for the hybrid chip, I wish it had two HP cores, but 7W is actually the same TDP as the MacBook Air Retina.
The_Assimilator - Thursday, December 13, 2018 - link
Thunderbolt is dead in mainstream PCs at this point, because there's no use-case in which it outperforms USB by enough to justify its cost (both of implementation and in the peripherals that people actually want to use). It's become another almost-Mac-exclusive like Firewire, and will share the same fate.
gamerk2 - Thursday, December 13, 2018 - link
Yep, this is pretty much the same thing that happened to Firewire. Thunderbolt never had a reason to exist; USB3 handles pretty much every TB3 use-case.
iwod - Thursday, December 13, 2018 - link
Except external GPUs.
Icehawk - Thursday, December 13, 2018 - link
The number of those sold can be counted on one hand. OK, maybe two. In all my years I've used a FW cable exactly once.
I've seen a total of two Thunderbolt cables; our Mac guy had them for some crap he was doing. IOW, dead just like FW. Proprietary stuff just doesn't tend to last long in an open, commodity market when equivalents are available for free or pennies on the dollar.
dampf - Wednesday, January 2, 2019 - link
Because nobody knows about it. I'm sure there is a big market for that - just connect a GPU to your laptop via Thunderbolt 3 and enjoy high-end gaming!
johannesburgel - Thursday, December 13, 2018 - link
Nonsense, it's exactly the other way around. Everything is converging on PCI Express. Every peripheral which can actually profit from USB3 speeds would be better off with a direct PCI Express connection instead, and Thunderbolt can do that. USB3 on the other hand requires protocol conversion on both ends of the cable. Instead of being able to just plug a normal GPU, an off-the-shelf NVMe SSD, or any other standard device into an external enclosure, without any hardware or driver adaptations, people have to come up with USB3 interface chips and special USB variants of every device. USB3 never had a reason to exist.
HStewart - Thursday, December 13, 2018 - link
I guess you have never truly experienced Thunderbolt - it has a 40Gbps rate compared to USB 3.0 and currently drives a dock powering multiple monitors plus multiple devices - also remember that it is a superset of USB-C Gen 2. Not to mention it supports external GPUs via PCIe.
HStewart - Thursday, December 13, 2018 - link
TB3 is being opened up - I believe I saw some third-party TB3 controllers out there - but there may be support issues.
Gc - Thursday, December 13, 2018 - link
Xe - looks like it stands for x'ellerator, for many hardware accelerator markets, not just gaming.
Visually, it looks like a tuxedoed figure skater X holding up Microsoft's 'e'.
Samus - Thursday, December 13, 2018 - link
“a customer asked for a product of about this performance but with a 2 mW standby power state. In order to do this, Intel created and enhanced a number of technologies inside the company. The final product is apparently ideal for the customer, however the chip will also be made available for other OEMs.”
Clearly Apple.
StrainedDig4 - Thursday, December 13, 2018 - link
Is there a reason why, when I expand the text, your video ads pop up onto the screen, refuse to quit, and obscure 1/4 of the display? Is your readers' displeasure less important than advertising dollars? I don’t read your articles to buy things, I have Amazon for that; I read to become more informed. Kindly let me know why your pages have become Tomshardware.com lookalikes...
ajc9988 - Thursday, December 13, 2018 - link
Advertising dollars keep the lights on and pay the salaries of the authors of these articles. Without ads, you don't get to read the content for pleasure, and then it all is a moot point, now isn't it? With that said, could you give constructive criticism on how they could make it less intrusive while still accomplishing the same goal?
StrainedDig4 - Thursday, December 13, 2018 - link
I cannot read a thing when the text is partially blocked by a video ad. I will simply read elsewhere. They need ad dollars and that is more important than my reading their article, obviously. Thanks for your input.
davidefreeman - Friday, December 14, 2018 - link
I've never had that problem before, and I keep ads unblocked on Anandtech to support them. Did you try refreshing the page?
FunBunny2 - Friday, December 14, 2018 - link
"Without ads, you don't get to read the content for pleasure, and then it all is a moot point, now isn't it."the 'advert business model' is a crock and scam. always has been. at least in print media, everyone who 'clicks' on the newspaper or magazine has to pass over, at least, the advert. with the innterTubes, the adverts not only steal more and more of the screen, but definitely slow down the process.
and here's what the proponents of the advert business model keep denying: those that block adverts won't interact with them at all. neither the page owner nor the advert buyer actually lose a 'real' customer. the page owners are scamming the advert buyers with 'eyeballs' metric. it means nothing but the buyers are too stupid, or scared, to insist on metrics that measure actual $$$ they earn from clicks. the page owners won't do that because it will reveal that innterTube adverts are of negative infinity value.
FunBunny2 - Thursday, December 13, 2018 - link
I wonder how much longer the 'if we build it, they will buy' mantra works? Do all those cube dwellers in MegaCorpUSA play high-requirement games all day long?
tommybobberson - Friday, December 14, 2018 - link
Could someone tell me what they mean by IP in the first Q&A question, please?
ajc9988 - Friday, December 14, 2018 - link
IP = Intellectual Property. This covers patents, copyright, trademarks, and potentially trade secrets (these are things like processes, etc., that they cannot receive other protections for, or that they choose not to patent because there is a time clock after which it falls into the public domain, so keeping it secret can potentially allow for longer protection periods). THIS IS NOT LEGAL ADVICE. IF YOU NEED LEGAL ADVICE, PLEASE SEEK THE ADVICE OF COUNSEL WITHIN YOUR JURISDICTION. THIS IS GENERAL LEGAL INFORMATION, IS PRESENTED AS IS AND WITH ALL FAULTS.
tommybobberson - Friday, December 14, 2018 - link
Also, could I know what embargo they were talking about in the second-to-last slide?
ajc9988 - Friday, December 14, 2018 - link
The slide is just marked as under embargo until the contractual embargo of information is lifted. During the presentation, there was an embargo that lifted like hours or a day after the conference. Nothing to really be concerned about.
tommybobberson - Friday, December 14, 2018 - link
Oh I see, thanks for the reply. Though I was hoping it would be something about their products :P
watersb - Saturday, December 15, 2018 - link
Great work, lots of info. Thanks very much!
peevee - Tuesday, December 18, 2018 - link
"Normally cache misses decrease by a factor of a square root of the proportional size when the cache is increased"This is neither true in most performance-critical real cases nor can provide any estimate of actual performance increase.
mikato - Friday, December 21, 2018 - link
I'm here for the "raja inside" comments. Disappointed.
peevee - Sunday, December 23, 2018 - link
"although it was pointed out that these improvements won’t help everyone, and might require new algorithms in order to use specific parts of the core."Which means it will help almost no one, as very few will optimize specifically for that core.
"We’re waiting to see what changes Intel has made on the front-end, which is where a lot of low-hanging fruit often lies for performance."
The low-hanging fruit in x86 was picked back with the Pentium. Since then it is just more and more kludges which cost more energy than they improve performance (normalizing for node).
peevee - Sunday, December 23, 2018 - link
"64 EUs... Each EU will support seven threads as before, which means that the entire GT2 design will essentially have 512 concurrent pipelines."Math?
And are these threads? Or ALUs?
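Spelling out the two readings the question implies (the per-EU lane count below is the commonly cited description of a Gen11 EU, not something stated in this article):

```latex
% Threads vs. ALU lanes for a 64-EU GT2 part:
\[
64~\text{EUs} \times 7~\text{threads/EU} = 448~\text{hardware threads}
\]
\[
64~\text{EUs} \times 8~\text{FP32 lanes/EU (two 4-wide SIMD units)} = 512~\text{lanes}
\]
% The "512 concurrent pipelines" figure only works out if it counts ALU lanes, not threads.
```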
peevee - Sunday, December 23, 2018 - link
"The 7-Zip demo was relatively straight forward, showing how the new instructions such as Vector-AES and SHA-NI in Sunny Cove can give the processor a 75% boost in performance over an equivalent Skylake based platform at iso-frequency."Huh? Have they recompiled (what compiler supports the new instructions then), or manually wrote a codepath in asm? And enabled encryption so to get any increase, so the increase is not actually for compression? Have they disabled compression too? ;)
dampf - Wednesday, January 2, 2019 - link
Really Intel? Adding AI improvements to the Core architecture in 2021? Smartphone vendors were doing it last year... way too late. And 5G will take off at the end of 2019.
TheJian - Wednesday, January 2, 2019 - link
I guess I'm not getting why I should be impressed by this.
https://www.electronicsweekly.com/news/design/comm...
Leti already did it? They say its IP can be used by others, so is this Intel's solution (what they're using, I mean)?
AMD already does chiplets, everyone does SoCs (Intel failed at them)... etc. 144mm^2 is not that small (about the size of a large Apple SoC). The current 7nm A12 is 83mm^2 with 6.9B transistors and two big cores, 4 small. AMD already did interposers/chiplets. Memory has been stacking for a while now. Not sure what is supposed to impress me here.
"Very much like a mobile chip" ...Pretty much...Again, why so impressed?
And as the OP noted, you have no idea how big the market is, nor how much they can make on them. I think they have to try to sell some before we can say that (many Intel things have been killed over the years), as their last mobile strategy cost them $16B+ in giveaways and lost them the fab race for a while (maybe forever, because that $16B lost should have gone DIRECTLY into fabs and 10nm wouldn't be crap now). Once Intel's 7nm hits, it looks like TSMC beats them anyway with 5nm (ok, tie? whatever). My point here is Intel's 7nm won't be much ahead of TSMC 5nm, if at all, as that is what it will compete with, since tapeouts happen Q2 2019 and chips 12-15 months later.
https://www.extremetech.com/computing/278742-tsmc-...
Many other articles out there like this, but this one has a good chart of when and how many wafers, etc. But if risk production is really as they say, 5nm chips by Xmas 2020. That puts Intel where with this at 7nm? Unless that answer is Xmas 2020, I'm thinking behind TSMC. It looks like TSMC is aiming for before Xmas and they've been moving at a good clip without many glitches recently, so Intel better get busy IMHO. TSMC is Q2 2019 risk, or 2H 2019, depending on who you believe I guess. But still, Intel 7nm better hit by Xmas 2020 then, right?
Comments on the last page: Uh, they should have bought NV under $10, but couldn't take the best from the GPU side because nobody could handle Jen as president :) WOW, look at that value you passed up, Intel - oh, and you'd RULE mobile by now with all those Tegras being on Intel's process 5+ years ago (never mind what GPUs would have done on Intel during this time), and you already had the modem solution too (NV bought one and had to kill it; Intel would have taken over everything - CPU/GPU/modem/mobile).
With Chromebooks, 2B mobile units not using NV GPUs, etc., nobody would have stopped them at the FTC, since more GPUs, and arguably more computing devices, ship without WINTEL or Intel's GPUs (even with NV in there). Intel GPUs wouldn't have been needed, mobile wouldn't have been lost (14nm Intel NV SoCs would have competed well against 20nm everyone else, same story before 14/20: Intel 22nm NV SoCs vs. 28nm everyone else), fab money wouldn't have been blown on mobile, etc. etc. All the problems Intel has now are because they blew $16B on failing instead of BUYING NV for that or a bit more. NV had a value back then of ~$6B or less - 659 million shares at $10; I bought at 12... ROFL. They should have owned NV anywhere in there and all this crap wouldn't have happened... LOL. We'll see how this "ideas from outside" crap works out now. To be fair, AMD had the same problems to some extent, firing Dirk for not liking mobile/tablet/APU and wanting a KING first, then that cheap crap later. Now they chase the king CPU (not GPU yet so far) again... LOL. Yeah, I own AMD stock but still think management is dumb. They can't price anything right, always trying to be a friend or gain share, which means NOTHING if it doesn't come with MARGIN as a poor man. Sure, the rich guy can flood a market and kill enemy sales, but only because he has wads of cash and can wait until he breaks you. A poor company needs NET INCOME for next-gen R&D and to retain people like KELLER etc.
I'm only in AMD stock for the 7nm server stuff, then I'm likely out. Rumor/hype works well in advance of real product at AMD (talking stock price here), so you likely don't have to wait for anything other than "shipping soon" or some leaked benchmarks etc., and the price will head to 40+ probably. Just run before that reality hits, or brave the waves... LOL. I think AMD will make money, it certainly has the server chips to do it, but management just seems to fail at pricing anything to take advantage while they can. Too worried about market share instead of MARGIN for R&D. I'd rather own the 10% that makes most of the money than the 80% that makes crap plus a little midrange crap. Apple thinks the same, see their quarterly reports for ages etc. Own the rich so you can afford to supply the poor. It doesn't work the other way around, generally speaking, especially as the little guy. You can't bleed as the poor little guy ;)
TheJian - Wednesday, January 2, 2019 - link
One more point, in case anyone brings it up: the A12X is 122mm^2 with 10B transistors; it just adds two more big cores IIRC (maybe a few other small changes). Same point though.