10 Comments
ZolaIII - Tuesday, May 8, 2018
You could have at least changed the title from EETimes. You know, there are people here who do read D&R. This will even be funny to Realjjj.

prisonerX - Tuesday, May 8, 2018
In this day and age, 15 or 20 percent frequency and power scaling seems perfectly decent to me. We're at the end of the road, folks, and are getting diminishing returns on this one simple trick to improve performance. It's over.

Nearly 2x area scaling, though, seems like great news. More space for more cores and more parallel workloads sounds good to me.
CiccioB - Wednesday, May 9, 2018
You have to consider that there are two problems with these limitations (little frequency scaling and only a modest decrease in power usage):
1. The more transistors you put on the die, the more heat you have to dissipate, and the smaller the surface, the higher the power density, which is problem no. 1 for heat dissipation. There would be no problem dissipating 1000W if the die surface were 10cm^2, but we are in a mm^2-constrained domain, so packing in more transistors all running at high frequency is an added problem.
2. Performance scaling is related to frequency scaling (or IPC), not to the number of cores you have available. Only a tiny number of algorithms and applied workloads scale indefinitely with the number of available cores. Most algorithms can't be threaded; the few that can need a lot of work (programming-wise) and add overhead, so they do not scale linearly, and having 64 cores at 1GHz does not help them run faster than a dual core at 4GHz (see the sketch just below).
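Here is a minimal Amdahl's-law sketch of that 64x1GHz vs 2x4GHz comparison; the 50% parallel fraction is an assumed, illustrative figure, not a measured one:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of the work.

def relative_performance(freq_ghz: float, cores: int, parallel_fraction: float) -> float:
    """Speedup over a single core (Amdahl's law), scaled by clock frequency."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return freq_ghz * speedup

p = 0.5  # assumption: only half the work can be parallelized
print(f"64 cores @ 1 GHz: {relative_performance(1.0, 64, p):.2f}")  # ~1.97
print(f" 2 cores @ 4 GHz: {relative_performance(4.0, 2, p):.2f}")   # ~5.33
```

With only half the work parallelizable, the dual core at 4GHz wins easily; push the parallel fraction to 90% and the 64-core part pulls ahead, which is why the answer depends so heavily on the workload.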
GPUs may be the devices that can nonetheless benefit from this transistor shrinking, especially if they can limit power usage (AMD has a lot of work to do on this side).
However, these limitations make MCM configurations more appropriate. They have all the characteristics needed to keep lowering prices while being able to dissipate more power, provided the communication channels are powerful enough (in terms of bandwidth and latency).
An MCM GPU's most critical problem is bandwidth in a NUMA configuration. We already see the problem with a CPU like Threadripper, which has about 1/10 the bandwidth of a GPU.
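As a rough illustration of that gap (the specs below are approximate figures for a Threadripper-class CPU and a Vega-class GPU, taken as assumptions for the sketch):

```python
# Peak memory bandwidth = transfer rate x bus width x channel count.

def peak_bandwidth_gbs(mts: float, bus_bits: int, channels: int) -> float:
    """Peak bandwidth in GB/s from MT/s, bus width in bits, and channel count."""
    return mts * (bus_bits / 8) * channels / 1000

cpu_bw = peak_bandwidth_gbs(2666, 64, 4)    # quad-channel DDR4-2666: ~85 GB/s
gpu_bw = peak_bandwidth_gbs(1890, 2048, 1)  # 2048-bit HBM2 @ ~1.9 Gbps/pin: ~484 GB/s
print(f"CPU ~{cpu_bw:.0f} GB/s, GPU ~{gpu_bw:.0f} GB/s ({gpu_bw / cpu_bw:.1f}x)")
```

Depending on which parts you compare, the ratio lands anywhere from ~6x to ~10x, so 1/10 is the right ballpark.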
peevee - Tuesday, May 8, 2018
Don't forget that these are their best estimates before reality kicks in.

Dragonstongue - Tuesday, May 8, 2018
Or is an educated guesstimate ^.^

I suppose it could go either way. If they know it will be even better than this, they are likely not to pre-warn other fabs (such as Intel), so whatever secret sauce they can pull at the last minute will give them that small advantage over the others (maybe not Intel, but GF/Samsung for sure, if anyone).
Time will tell as it always does.
I suppose, as another person above said, that if scaling is where it should be, then even without an awesome clock speed improvement it could allow more cores etc. to "make up the difference", and the way things are going, it will be how many cores they can stuff per "chip" that rules the day going forward.
They can only get so many transistors per mm2, and can only get them running so fast with SOI/FD-SOI/FinFET or whatever (limitations of silicon and power/temperature constraints), so instead of chasing crazy-quick clock speeds they can instead engineer for as many small features as possible, trading some of the "speed" for "more cores" while sticking within the design rules.
Something along those lines, anyways.
MobiusPizza - Tuesday, May 8, 2018
Why is there a performance scaling number for 10FF vs 16FF+ and for 7FF vs 16FF+, but 7FF vs 10FF is marked with a question mark? Surely you can calculate that to be below 10%, as 16FF+ to 10FF gives 20% while 16FF+ to 7FF gives 30%.
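FWIW, treating the published gains as multiplicative, a quick check of that estimate:

$$ \frac{1 + 0.30}{1 + 0.20} \approx 1.083 $$

i.e. roughly 8% going from 10FF to 7FF, which is indeed below 10%.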
smilingcrow - Wednesday, May 9, 2018

5FF appears to be about 5 times denser than 16FF+, which is great for adding more cores. But as someone else pointed out, the power density will be an issue for desktop/server, although maybe not for mobile.
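A rough back-of-the-envelope sketch of that power-density worry; the per-transistor power scaling and the baseline below are assumed placeholders, not published figures:

```python
# If density rises 5x but per-transistor power falls less than 5x,
# power per unit area goes up at iso-frequency. All numbers here are
# illustrative assumptions, not TSMC data.

density_ratio = 5.0          # 5FF vs 16FF+ density, per the comment above
power_per_transistor = 0.35  # assumption: each transistor draws 35% the power
baseline_w_per_cm2 = 50.0    # assumption: 16FF+ chip power density

new_w_per_cm2 = baseline_w_per_cm2 * density_ratio * power_per_transistor
print(f"{baseline_w_per_cm2:.0f} -> {new_w_per_cm2:.0f} W/cm^2")  # 50 -> 88
```

Anything much above the baseline has to be clawed back with lower clocks or dark silicon, which matters less for mobile parts that are thermally throttled anyway.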
Opencg - Thursday, May 10, 2018

If we continue at this rate, we will be at 0.1nm by 2030 AND we will have a self-sustaining Mars base thanks to Elon Musk.
benzosaurus - Sunday, May 13, 2018

Or at least we would be, if 0.1nm weren't about the diameter of a hydrogen atom, and individual hydrogen atoms make pretty lousy insulators and semiconductors.
0ldman79 - Friday, October 5, 2018

Yep. We're getting to the end of the current growth of lithography. They're going to have to approach the problem differently very soon.