Demon-Xanth - Friday, May 5, 2017 - link
A silicon atom's width is about 110pm, so they are literally going into widths that are only double digit numbers of atoms wide.
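To put rough numbers on that, a quick back-of-the-envelope check (assuming the ~110 pm figure above; actual lattice spacing and feature definitions vary):
    /* Rough count of silicon atoms across a feature of each "node" size.
       Assumes the ~110 pm per-atom width quoted above; illustrative only. */
    #include <stdio.h>

    int main(void) {
        const double atom_nm = 0.110;                 /* ~110 pm */
        const double nodes[] = { 14.0, 10.0, 7.0, 5.0 };
        for (int i = 0; i < 4; i++)
            printf("%4.0f nm feature ~ %5.1f atoms wide\n",
                   nodes[i], nodes[i] / atom_nm);
        return 0;   /* 7 nm / 0.110 nm ~ 63.6 -- the "63.63" cited below */
    }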
MananDedhia - Friday, May 5, 2017 - link
Processes that deposit single atomic layers are already in common use - even at 28nm. For some layers, atomic layer deposition is the only way to go. The complexity has increased here because we are now increasing the number of layers and devices that need to be defined at those scales.
bug77 - Friday, May 5, 2017 - link
Yes, but you can't take 3 atoms, call them source, drain and gate, and assemble them into a transistor.
ddriver - Friday, May 5, 2017 - link
Why not? IBM have already demoed atomic assembly. The downside: it is very slow - one atom at a time, vs etching septillions of atoms with acid at the same time.
BTW I finally get how they will get to 5 nm - by lying about it. How many of a 10nm chip's features are at 10nm resolution? Not many. Area decrease is already falling behind the process scale number, and it is only going to get worse.
On the upside - no biggie - we already have enough performance to run terminators. So our extinction is well assured.
philehidiot - Friday, May 5, 2017 - link
Don't you go worrying about terminators. I've already started work on the first of many. I was just so sick of not being able to get a seat on the bus. Was thinking no one wants to sit next to a cybernetic killing machine, so I can send it to the earliest bus stop, get it to reserve a seat, and ride to work without the smell of unwashed, practically rotting human being next to me.
Far safer than one of those self driving car things. Bloody death traps.
Kevin G - Sunday, May 7, 2017 - link
Are you Sarah Connor?
Xajel - Sunday, May 7, 2017 - link
Yeah, they're actually moving these actual atoms... atom by atom, like Lego.
Kevin G - Sunday, May 7, 2017 - link
The node size is mostly marketing now, which is why Intel went out of their way to define some new metrics ~6 weeks ago. While I wasn't a fan of that marketing spiel, there is a point that there needs to be a new metric, as traditional node shrinks are few and far between going forward.
What I think the foundries are waiting on is a new big breakthrough, as they realize that they cannot continue on the existing path indefinitely. Germanium can come in as an exotic substitute material for silicon, but wafer prices are extremely expensive. Even then, germanium doesn't solve the node problem; it just provides better material properties at existing nodes. Carbon nanotubes and graphene are two related materials seen as potential replacements for silicon as we get even closer to the atomic level. Both have some good properties for circuit design, but no one has found a means of economical mass production.
Both Intel and IBM have invested heavily into silicon photonics. So far their efforts have led to advancements in IO but not raw processing, though optical logic gates do exist. Much like other exotic solutions, these suffer from mass production problems that keep them in the research lab. (Notice a trend starting here?)
I think strategies like interposers and EMIB are emerging to sidestep the absolute need for shrinks, at least as far as limiting transistor counts goes. Granted, interposers/EMIB do nothing with regards to power consumption. The one nice thing about these techniques is that they potentially allow mixing some of the more exotic solutions with bulk processes. For example, a die with silicon photonics could interface with some high speed optical circuits in the package and also interface with more traditional bulk processes for its SRAM cache. Very expensive, but worth considering when no other new-node alternatives are available. Granted, such choices are not going to happen tomorrow, but they're clearly on the horizon.
eachus - Sunday, October 1, 2017 - link
My read is that the first application of nanotubes or graphene will be laying down a copper layer, then growing graphene on top of it. The trick will be to get the graphene to align on top of the copper, which will probably take another layer in between, perhaps silver. Could silver be substituted for copper in bulk? Good question. It is a better conductor and solves the alignment problem.
You may think of silver as a precious metal along with gold and platinum, but over fifty per cent of the silver mined goes into silver solder for brazing or soldering metals together. Most silver solder is used for brazing, go figure. Silver is also used in thermal compounds for getting a good seal between a CPU chip and the heat sink. Obviously replacing a few grams of copper with silver inside the chip won't raise prices significantly.
Getting copper to bond to the graphene is not a problem--even if the reverse is a significant problem. However, high-temperature processes may damage the graphene. Best is probably a "wet" process to put a thin layer of copper on the graphene before building the next litho layer. Putting the graphene in a copper sandwich like this should significantly improve the characteristics of the layer. This will show up as reduced capacitance with adjacent conducting traces--less cross-talk and faster signal propagation.
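For a feel of the "better conductor" point, a minimal sketch using bulk resistivities (real interconnects this small suffer extra surface and grain-boundary scattering, so treat the numbers as illustrative; the trace dimensions are assumptions):
    /* Bulk resistance of a 1 um interconnect run, copper vs silver.
       Cross-section and length are illustrative assumptions; scattering
       effects at these sizes are ignored. */
    #include <stdio.h>

    int main(void) {
        const double rho_cu = 1.68e-8, rho_ag = 1.59e-8;  /* ohm*m, bulk, ~20 C */
        const double len  = 1e-6;                         /* 1 um run */
        const double area = 20e-9 * 40e-9;                /* 20 nm x 40 nm trace */
        printf("Cu: %.1f ohm  Ag: %.1f ohm  (Ag %.1f%% lower)\n",
               rho_cu * len / area, rho_ag * len / area,
               100.0 * (1.0 - rho_ag / rho_cu));
        return 0;
    }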
beginner99 - Monday, May 8, 2017 - link
"BTW I finally get how they will get to 5 nm - by lying about it"Process tech numbering hasn't been about feature size for the past 2 decades.
mdriftmeyer - Friday, May 5, 2017 - link
An atom being 63.63 times smaller in diameter than the fab process number is quite a large span in scale.
boeush - Saturday, May 6, 2017 - link
One thing to consider, though, is that when atoms are bound into molecules via covalent bonds, the distances between their nuclei shrink below the sum of the adjacent atoms' stand-alone radii: in other words, chemically bound atoms pack together much more tightly than one might naively expect by conceptualizing each atom as a solid sphere...
Jon Tseng - Friday, May 5, 2017 - link
PSA: NODES WITH THE SAME (NUMERICAL) NAME FROM DIFFERENT VENDORS ARE NOT EQUIVALENT.
Just wanted to get that out of the way early! :-p
Meteor2 - Friday, May 5, 2017 - link
Indeed. The x nm labels are meaningless now; they may as well call them Bob and Joan.
The only way to compare them is via the inter-node PPA change metrics. Anyone have numbers for 22->14 and 14->10 from Intel to hand?
Even then I know Intel's 14 nm is better on at least power and performance than others' 14/16 nm, as the latter are actually 20 nm with FinFET added, but I'm not aware of any meaningful way of comparing them.
lefty2 - Friday, May 5, 2017 - link
That's not totally true. The first iteration of Intel's 14nm performed worse than TSMC's 16nm. 14nm+ is much better, though.
Drumsticks - Friday, May 5, 2017 - link
I'm not calling you wrong or anything, but can you source that? Intel's original 14nm might have had bad yields for a while, but I imagine it's difficult to compare outright performance without published numbers, given that Intel's 14nm went into CPUs clocked anywhere from 800MHz to 4.5GHz, versus TSMC's biggest wins being Apple and GPUs, none of which went past the low 2GHz range. Obviously it's difficult to compare performance on frequency with something like that.
SuperMecha - Saturday, May 6, 2017 - link
See page 4. There are probably several other factors that determine performance other than leakage and drive current.
https://newsroom.intel.com/newsroom/wp-content/upl...
Meteor2 - Sunday, May 7, 2017 - link
Good link, and a good quote within:
"Industry “10 nm” technologies are expected to ship sometime in 2017 and have similar density to Intel’s 14 nm technology that has been shipping since 2014."
helvete - Thursday, July 20, 2017 - link
Would you expect anything else from an Intel paper? (Not saying they are far from the truth.)
lefty2 - Saturday, May 6, 2017 - link
The first iteration of 14nm was Broadfield, and Broadfield did not clock to 4.5GHz. Also, you can't compare to a smartphone SoC, which has to keep within a very small power envelope.
Meteor2 - Sunday, May 7, 2017 - link
Do you mean Broadwell? But what's OC'd clock speed got to do with anything?
jjj - Friday, May 5, 2017 - link
Pretty sure that the 10nm LPE perf claims are vs 14LPE, not LPP, as 27% higher perf is way too much.
Anton Shilov - Friday, May 5, 2017 - link
Regarding the 10LPE vs 14LP*, I am not sure, because we have two statements from Samsung that contradict each other.
They stated the following in October:
"Samsung’s new 10nm FinFET process (10LPE) adopts an advanced 3D transistor structure with additional enhancements in both process technology and design enablement compared to its 14nm predecessor, allowing up to 30-percent increase in area efficiency with 27-percent higher performance or 40-percent lower power consumption."
http://www.anandtech.com/show/10765/samsung-10nm-m...
But if you look at the picture (from August) there (http://images.anandtech.com/doci/10765/dac.png), they mentioned ~30% performance increase at the same leakage power, which can be considered as 27%... But if you happen to see some more up to date slides from Samsung, please let me know.
jjj - Saturday, May 6, 2017 - link
If they had anywhere close to 27% over 14LPP, they would have more design wins, so it's safer to assume that "predecessor" means LPE. The phrasing itself is iffy: why "compared to its 14nm predecessor" and not just "compared to 14nm"? Corporations are tricky like that.
jjj - Sunday, May 7, 2017 - link
Hong Hao, senior vice president of the foundry business at Samsung Semiconductor: "10nm brings a lot of benefits to our customers in terms of area scaling, performance and power or PPA. So overall, the PPA improvements are very substantial compared 14nm. We have compared that in terms of the performance, area and power to 14nm LPE. 14nm LPE is our first-generation finFET technology. We see up to a 30% area reduction with a 27% performance improvement or 40% lower power at the same performance."
http://semiengineering.com/to-10nm-and-beyond/
willis936 - Friday, May 5, 2017 - link
Feynman is crying tears of joy in his grave. Intel is crying for another reason.
melgross - Friday, May 5, 2017 - link
Oh, I don't know. It's acknowledged that Intel's current 14nm process is equivalent to others' 10nm processes, and likely their 10nm will be equivalent to others' 7nm.
I don't think Intel has anything to worry about for the next few years. I still doubt that 5nm will come about - at least, not as a real 5nm process, though it will likely be advertised as such.
But when that wall is reached, for everyone, then, at long last, Intel will lose most of its process advantages. But that will be in 5 to 8 years, so there's still a long way to go.
tarqsharq - Friday, May 5, 2017 - link
We'll have to see if we get another materials switch away from silicon.
Some kind of graphene, maybe a photon based solution instead of electrons?
Apparently quantum computing is only useful for certain types of operations, so that's not a magic bullet for speeding up all of our computing tasks.
Meteor2 - Friday, May 5, 2017 - link
I reckon we'll get real 5 nm, probably with quad patterning, possibly with a new transistor design, in around 2023-25. Difficult to see where we can go after that. Maybe that graphene stuff I suppose.
vladx - Friday, May 5, 2017 - link
Nanotubes seem the most feasible solution.
melgross - Saturday, May 6, 2017 - link
A lot of chip experts don't believe that a true 5nm is possible. Not because we can't build it, but because the laws of physics are closing in. At that point, we have no substitute for FinFET, which doesn't work below 7 nm, and the three technologies that have been considered as a replacement aren't working either.
When you begin to have features that are just 10 to 12 atoms wide, Heisenberg hits you hard: as many electrons escape the feature as travel through it. That's a death knell. So I expect that 7nm will really be 10 to 14 for most fabs, the way 14 is really 16 to 20 for most now.
The next step is expected to be carbon nanotubes, which both HP and IBM have been working on for years with limited success. That's hoped to be ready, in limited complexity, by 2005 to 2030.
But there will be a wide gap between any silicon technology and that, even assuming they can get it working on a commercial basis at all. There are still too many steps for that, and they don't yet know how to climb them, or even if they're there.
melgross - Saturday, May 6, 2017 - link
Oops, too many typos. I meant ready by 2025 to 2030, of course.
Meteor2 - Sunday, May 7, 2017 - link
You're right, but I think between EUV and further development of the new gate concepts we'll make 5 nm happen. Although, since it's really a question of whether commercial interests will fund the R&D rather than whether 'science' can make it happen, I suppose there's a risk 5 nm won't happen because designing such chips will be fantastically expensive. Will we be prepared to spend the $$$ for the performance which would be delivered?
lefty2 - Friday, May 5, 2017 - link
Intel has already lost its process advantage. Samsung's 10nm is currently in HVM and denser than Intel's 14nm. Intel say they will launch 10nm in 2017, but the yields are so bad they can hardly be considered production yields. By the time it reaches production yield, TSMC will have 7nm.
Drumsticks - Friday, May 5, 2017 - link
Intel's 10nm is going to be denser than Samsung or TSMC's 7nm imo, going by the numbers we see here. Intel's 14nm is already denser than their competition by somewhere in the realm of 30% (per the hard numbers Intel released a few weeks back, and nobody has contradicted them). Intel's jump to 10nm is going to provide ~2.7x higher density than their 14nm node, and I think they've said several times they plan to ship 10nm this year.
Even with a 70% area reduction on 7nm vs 16nm at TSMC, I don't think that overcomes a 30% lead + a 2.7x increase in density on top of that lead.
For another comparison, Intel's 10nm measures 100M transistors / mm^2, versus their competition at 50M / mm^2 at 10nm. Assuming TSMC's transistor density is around the "Others" metric, a 37% reduction in area from 10nm to 7nm would still leave them short of Intel's process node. I suspect everybody else will need a 5nm node to temporarily jump ahead of Intel's 10nm, before Intel's 7nm rolls around in 2020 or something and puts everybody behind again.
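A quick sanity check of that arithmetic, using the figures above (a sketch, not vendor data):
    /* A 37% area reduction multiplies density by 1/(1 - 0.37) ~ 1.59x. */
    #include <stdio.h>

    int main(void) {
        const double others_10nm = 50.0;    /* MTr/mm^2, per Intel's slides */
        const double intel_10nm  = 100.0;   /* MTr/mm^2 */
        const double shrink      = 0.37;    /* TSMC 10nm -> 7nm area reduction */
        printf("others at 7nm: ~%.0f MTr/mm^2 vs Intel 10nm: %.0f\n",
               others_10nm / (1.0 - shrink), intel_10nm);   /* ~79 vs 100 */
        return 0;
    }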
Numbers come from https://newsroom.intel.com/newsroom/wp-content/upl... which, based on most of what I've seen, has been accepted as a well-done report. I'd love to see everybody switch to a more objective metric, since process node is now just a marketing game.
vladx - Friday, May 5, 2017 - link
They obviously can't compete with Intel head-to-head, so they have to resort to marketing gimmicks to make it appear they're coming out ahead.
SuperMecha - Saturday, May 6, 2017 - link
Intel's 14nm density advantage compared to GF/Samsung's 14nm process is only 23%, not 30%. Also, Intel's recent presentation only compares up to their competitors' 10nm, not 7nm. In 2018 Intel will lose its transistor density advantage.
https://www.semiwiki.com/forum/content/6713-14nm-1...
Drumsticks - Saturday, May 6, 2017 - link
I know Intel compared to the 10nm products; I was just extrapolating from that based on what TSMC stated (in the article) about their 7nm vs their own 16nm and 10nm. With everybody using different statistics now for each of their parts, it's not surprising that Intel gets passed every now and then, considering the time between their nodes is getting longer and longer.
lefty2 - Saturday, May 6, 2017 - link
Scotten Jones did a detailed analysis of various leading edge nodes and concluded that TSMC's 7nm is slightly denser than Intel's 10nm and Samsung/TSMC 10nm is slightly denser than Intel's 14nm:
https://www.semiwiki.com/forum/content/6713-14nm-1...
Intel's 2017 launch of 10nm is virtually a paper launch. They are only going to release a couple of low volume SKUs at the very end of 2017, just so they can claim that they have the process lead. It's not till late 2018 or 2019 that the bulk of their products go to 10nm. Also, the first iteration of 10nm performs worse than Intel's current 14nm+ process.
melgross - Saturday, May 6, 2017 - link
I don't believe it. First of all, neither he nor anyone else outside those companies actually knows enough about the actual chips to know the true density. Evaluating these by making some basic mathematical calculations doesn't tell us anything about the actual processes. It's all theoretical.
SuperMecha - Saturday, May 6, 2017 - link
“First of all, neither he nor anyone else outside those companies actually knows enough about the actual chips to know the true density.”
That's grossly false. Their customers (i.e. AMD, Nvidia, Qualcomm, etc.) need to have the PDKs to design their chips. The PDKs will contain the design rules and ultimately the transistor specifications necessary to design a chip for that process. TSMC will be accepting 7nm tape outs this quarter, which means the transistor specifications were likely frozen some time ago. Never mind the fact that the companies have released details of their future process nodes.
Wilco1 - Sunday, May 7, 2017 - link
I guess you also don't believe, then, that both TSMC's and GF's 16/14nm processes are already denser than Intel's 14nm? See e.g. http://www.anandtech.com/show/11170/the-amd-zen-an... Apple A8 on 20nm was shown to be much denser than Core M on 14nm.
Whatever the marketing claims say, Intel is already behind on density in actual designs. Intel's latest 14nm process is even less dense. So what makes you think that Intel could catch up?
melgross - Wednesday, May 10, 2017 - link
Because what you're saying is wrong. I haven't read anything saying that Intel's process is less dense.
Wilco1 - Thursday, May 11, 2017 - link
Well, the link I provided shows it very clearly - I presume you didn't read it?
Intel may have better CPP/MP/FP/SRAM at 14nm vs TSMC/GF, but AMD still gets better L2 and L3 density despite the less advanced process. And density on real designs matters more than the process marketing numbers (which are about bragging rights, but don't tell the whole story).
lefty2 - Sunday, May 7, 2017 - link
Scott's analysis was actually spot on. Dick James of TechInsights actually measured Samsung's 10nm chip: https://twitter.com/Siliconicsdick/status/85632866...
He measured a 68 nm contacted gate pitch, 51 nm metal pitch, dual STI and a single dummy gate.
That's compared to Intel's 14nm at 70 nm CPP x 52 nm MMP.
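Since CPP x MMP is a rough proxy for standard-cell area, those pitches can be compared directly; a sketch that ignores fin pitch, track height and other design rules:
    /* Cell-area proxy: contacted gate pitch x minimum metal pitch. */
    #include <stdio.h>

    int main(void) {
        const double samsung10 = 68.0 * 51.0;   /* nm^2, measured above */
        const double intel14   = 70.0 * 52.0;   /* nm^2 */
        printf("Samsung 10nm: %.0f nm^2  Intel 14nm: %.0f nm^2  (%.1f%% smaller)\n",
               samsung10, intel14, 100.0 * (1.0 - samsung10 / intel14));
        return 0;   /* ~4.7% smaller -- "slightly denser", as Scott concluded */
    }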
sc14s - Friday, May 5, 2017 - link
Seems to me, past 2025 or so, what are they going to do to compete, assuming you hit that ~5nm and then iterate a few times on that process to maximize its potential?
You can't really go any further without some major physics breakthrough. It's kind of a race to the bottom, just in a physics sense instead of the financial price-slashing sense.
melgross - Friday, May 5, 2017 - link
Most of the work has been with carbon nanotubes, with both IBM and HP showing progress. But it's not expected to go commercial (if ever, really) before the mid 2020s, or possibly (more likely) the later part of the decade.
So there will be a gap. Software developers will finally need to improve their software to improve performance, which should be a big benefit for everything.
bji - Friday, May 5, 2017 - link
Regular hardware speed increases have allowed more software to be produced across a broader range of products more quickly, because developers don't have to spend their time optimizing for performance as much; the hardware gets them to a 'good enough' place easily. Once the hardware is not getting faster, for every problem that requires greater performance you simply shift more of the cost of creating the product to the software side. It will take longer to produce programs as a result.
There is no free lunch; it's not as if software development could have added more performance at no extra cost all along, and now that hardware stops increasing in speed, software development will just start adding that free performance in. The cost of producing software will go up for the segment of the software market that is performance sensitive. Of course, quite a lot of the market is not performance sensitive, so there will be little appreciable impact on most of the software market.
tarqsharq - Friday, May 5, 2017 - link
It would be quite the time to be a skilled software developer though.
A next generation John Carmack? Using computing tricks to pull off things that traditionally would bog down the available hardware?
vladx - Friday, May 5, 2017 - link
You're assuming that there are real solutions that could revolutionize software performance and scalability; just like the P versus NP problem, we might never get an answer to that.
Hulk - Friday, May 5, 2017 - link
Yeah, it's called "Assembly." We'll have come full circle. In the beginning it was assembly because processors were so slow. And it appears that at the end it will also be Assembly, as processor power stalls. Kind of fitting. I used to program in Assembly on my Atari 800 back in 1982.
vladx - Friday, May 5, 2017 - link
Not sure if serious. Assembly can work for small to medium projects, but not really big ones.
patrickjp93 - Friday, May 5, 2017 - link
Roller Coaster Tycoon was programmed 100% in assembly, and that is not a medium-sized project.
vladx - Friday, May 5, 2017 - link
Only the first 2 RCT games were written in assembly, and both had only 2D graphics, so it was mostly game logic, which doesn't take much code. So yes, I would call that a mid-sized project.
mapesdhs - Saturday, May 6, 2017 - link
I once wrote an entire word processor in 68K; it worked very well (students ended up using it instead of the uni-supplied program). People make false assumptions about coding in assembly. Beyond a certain point in complexity, its use becomes more like a high level language, i.e. setting parameters and calling procedures & functions. Just the natural way one solves problems in a structured manner brings this about. Assembler doesn't inherently lend itself to structured programming, but it doesn't have to; it's not hard to use it in a way that makes up for such issues, i.e. as long as the design process itself is structured. I found it to be the best of both worlds, getting at the raw metal but also being able to focus on higher level design issues. Easily the most fun project I ever worked on, and the largest printed listing the uni in question had ever received at the time. :D (took 2.5 hours to print out)
prisonerX - Sunday, May 7, 2017 - link
Yeah, Roller Coaster Tycoon was written in 1999, nearly 20 years ago.
Welcome to the 21st century; you might want to look around, some things have changed.
prisonerX - Sunday, May 7, 2017 - link
Uh, no. A compiler like GCC or LLVM will beat hand coded assembly every time on modern processors, without fail, unless you're talking about tiny or specialized code (say, bootloaders).
The mistake you're making could be characterised as "premature optimisation" on a grand scale. You think if you tweak every bit of code and write it in assembly you'll get something more efficient. Sorry to break it to you, but you won't. Good structure (this includes choice of data structures and algorithms) is a greater influence on code than tweaking, if you're talking about anything of a reasonable and practical size.
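To illustrate the point: a loop like the sketch below is what compilers excel at. Built with gcc or clang at -O3 (plus -ffast-math to permit reordering the float reduction), it will typically be unrolled and auto-vectorized, which is very hard to match by hand across every microarchitecture:
    /* A compiler-friendly inner loop: simple data flow, no aliasing
       (the restrict qualifiers promise the arrays don't overlap). */
    #include <stddef.h>

    float dot(const float *restrict a, const float *restrict b, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }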
Kevin G - Sunday, May 7, 2017 - link
The general rule of thumb is that hand coded assembly will be used in critical loops to accelerate portions of code that compilers like GCC or LLVM produce.
You're not wrong about data structures and algorithm choice, but in the light of smart decisions there, assembly is the next level of optimization. Assembler can't fix GIGO.
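Worth noting that the "hand coded assembly" in those critical loops is, these days, usually compiler intrinsics rather than a raw .s file. A minimal SSE sketch of the idea (the saxpy name and the n-multiple-of-4 simplification are mine, for illustration):
    /* Explicit SIMD via SSE intrinsics: 4 floats per iteration,
       without dropping all the way down to assembly. */
    #include <immintrin.h>
    #include <stddef.h>

    void saxpy(float a, const float *x, float *y, size_t n) {
        __m128 va = _mm_set1_ps(a);
        for (size_t i = 0; i < n; i += 4) {          /* n assumed % 4 == 0 */
            __m128 vx = _mm_loadu_ps(x + i);
            __m128 vy = _mm_loadu_ps(y + i);
            _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
        }
    }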
amosbatto - Monday, May 21, 2018 - link
I doubt that assembly will come into vogue, but a whole new generation of compiled languages is appearing which are designed for speed and low resource consumption: Rust, Julia, Swift, Go and Nim. These languages have performance slightly slower than C, but without its security problems, so I expect them to be widely used in the future as the hardware stops getting faster.
Meteor2 - Friday, May 5, 2017 - link
More to come from ISA developments, too. Maybe not huge amounts but definitely some -- SVE for example.
vladx - Friday, May 5, 2017 - link
Yep, I'm skeptical about a software development revolution; I think focusing on better computer architectures has a much better outlook.
melgross - Saturday, May 6, 2017 - link
Well, there's the question of whether we can get a revolutionary computer architecture these days. It's not easy. So we're still looking at CISC vs RISC. Maybe we need to go more RISC and less CISC. Pretty much everything is some combo of the two.
ABR - Sunday, May 7, 2017 - link
No revolution, but technology has been and will continue advancing here for a long time. Higher-level languages and object-oriented programming making larger projects possible. Evolution in UI frameworks and asynchronous programming (cutting edge in mobile frameworks). Hardware virtualization, network definition migrating to software, and environment encapsulation (e.g., Docker), simplifying resource management. Frameworks like OpenGL, DirectX, and Metal bridging the gap between graphics software and graphics hardware. Libraries like Caffe and TensorFlow doing the same for neural networks and learning software.
Also engineering tools and techniques. Distributed version control systems for source code management. Suites like dpkg or maven for handling dependencies. Team and process practices like the family of Agile techniques.
The sophistication and sheer amount and ubiquity of software applications in our lives today depends just as much on all of these things as on faster, lower power hardware.
boeush - Saturday, May 6, 2017 - link
Once we reach the absolute quantum limits of 2D scaling, we will be looking at alternative materials (graphene, nanotubes, diamond, III-V chemistry, etc.) for better power and frequency scaling. At the same time, 3D stacking of 2D layers by the dozens, then hundreds, then thousands. At the same time, advanced heat dissipation tech (graphene/nanotubes/diamond could serve double duty there), as well as (at least for non-portable devices) refrigeration not just for overclocking but for normal operation. Maybe even look into superconducting chips/interconnects using high-Tc materials, immersed in liquid nitrogen... There's also research into molecular computing. And, of course, you can always trade off generality against special purpose accelerator ASICs that can provide many-orders-of-magnitude speedup vs conventional processors on the same node in specific tasks: and the more compact the node, the more of these various narrow-use circuits you can affordably cram onto a single chip...
melgross - Saturday, May 6, 2017 - link
Sure, there are a lot of technologies out there. But most are just impractical, or just too expensive and complex. We've had liquid cooling for some time, but do most people really want that? What about notebooks? Can't really be done.
Other technologies have been considered for a couple of decades, but are so expensive that even mainframe CPUs can't use them.
Most of these technologies can only be used for very high end uses, because of expense, effectiveness, and even power draw. But that's just for the top 0.1% of computing. What about the rest of us?
ironargonaut - Monday, May 8, 2017 - link
You mean like when they said that the physics of light would prevent any geometries less than 193nm? Sorry, but the "wall" that was going to end CPU density increases has been broken so many times that I won't believe it till I see it. Of course, just because all those predictions were wrong doesn't mean yours is. Cheers.
Gich - Friday, May 5, 2017 - link
Some time ago I dug up this:
"14/16nm":
Intel ~13.4nm - from Broadwell to Coffee Lake, Atom x5/x7
Samsung/GFo ~16.6nm - AMD Zen and Rx400/500, nVidia 1050, SD 620/820, Exynos 7/8, Apple A9
TSMC ~18.3nm - nVidia 1060+, Apple A9/10
"10nm":
Intel ~9.5nm - Cannonlake
TSMC ~11.3nm - Helio X30, Kirin 970, Apple A10X
Samsung ~12.0nm - SD835, Exynos 9
Gich - Friday, May 5, 2017 - link
"7nm":Intel ~6.7nm
TSMC/GF ~8.2nm
Samsung ~8.4nm
smalM - Monday, May 8, 2017 - link
TSMC ~18.3nm - that's 16FF, which was never used for mass production but is always used by Intel for comparison...
helvete - Thursday, July 20, 2017 - link
An Intel paper, Intel's point of view.
Lodix - Friday, May 5, 2017 - link
Samsung's 10nmLPP has a 15% reduction in power consumption compared to the LPE version.
Also, the 10nmLPE numbers about performance and power (27/40%) are compared to the previous 14nmLPE, not the Plus version.
Lodix - Friday, May 5, 2017 - link
And the 10nmLPU version is aimed at area reduction.
Anton Shilov - Friday, May 5, 2017 - link
Thank you for the corrections. You are right about the 10LPP; they made an appropriate announcement a couple of weeks ago, but somehow I missed it. Fixed.
Regarding the 10LPE vs 14LP*, I am not sure.
They state the following:
"Samsung’s new 10nm FinFET process (10LPE) adopts an advanced 3D transistor structure with additional enhancements in both process technology and design enablement compared to its 14nm predecessor, allowing up to 30-percent increase in area efficiency with 27-percent higher performance or 40-percent lower power consumption."
http://www.anandtech.com/show/10765/samsung-10nm-m...
If you look at the picture there (http://images.anandtech.com/doci/10765/dac.png), they mention ~30% performance increase at the same leakage power, which can be considered as 27%... But if you happen to see some more up to date slides from Samsung, please link them.
As for the 10LPU, I guess they are going to make an announcement in late May.
Lodix - Friday, May 5, 2017 - link
I see the arrow joining the 14nmLPE version with 10nmLPE.
Lodix - Saturday, May 6, 2017 - link
In this PDF from Samsung they specify that the improvements stated are from 14nmLPE and not from 14nmLPP:
https://www.semiwiki.com/forum/attachments/f293/18...
MajGenRelativity - Friday, May 5, 2017 - link
I know AMD will be using GF 7nm for their GPUs after Vega 10/11, but I wonder what NVIDIA will be using after this current Pascal generation. Does anyone have any clues?
haukionkannel - Friday, May 5, 2017 - link
If they are wise, they'll use at least two different suppliers, just like Apple.
melgross - Saturday, May 6, 2017 - link
Apple has moved away from that model. I doubt they wanted to do it, but neither Samsung nor TSMC could produce all the SoCs they needed that year, so they had to.
It's also interesting to note that while Apple had to tune their designs to both processes, the TSMC 16nm was 20% more efficient than the Samsung 14nm process. We saw results of those tests either here or on arstechnica, I don't remember which now. But the total device efficiency advantage was under 5% once everything was taken together.
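One way both of those numbers can be true at once is that the SoC is only one of several power consumers in a phone. A sketch, where the 25% SoC share of device power is my illustrative assumption, not a figure from those tests:
    /* Device-level gain ~ SoC-level gain x SoC's share of device power. */
    #include <stdio.h>

    int main(void) {
        const double soc_share = 0.25;   /* assumed SoC fraction of device power */
        const double soc_gap   = 0.20;   /* TSMC 16nm vs Samsung 14nm, per tests */
        printf("device-level gap: ~%.0f%%\n", 100.0 * soc_share * soc_gap);
        return 0;   /* ~5% -- consistent with "under 5%" above */
    }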
But still, it shows that we can't go by theory when extrapolating these supposed numbers to the real world. I'd still rather see Apple on Intel.
The_Assimilator - Friday, May 5, 2017 - link
I would be extremely surprised if it was anyone except TSMC. Especially since TSMC has just announced 10nm is ready for H2 this year - which, not coincidentally, is when NVIDIA is rumoured to drop the first Volta products.
The only GPU that NVIDIA has ever sourced from a company other than TSMC is GP107, from Samsung at 14nm. Even though Sammy's 14nm node is worse than TSMC's 16nm, GP107 is such a (relatively) small and simple chip that it didn't really matter. We'll probably see a similar story with Volta: TSMC gets the big Pascals, Samsung gets the small ones.
There is, of course, always the possibility that NVIDIA will stick with the now-mature (and cheaper) 16nm for Volta - I imagine it will depend on whether Volta is more (10nm) or less (16nm) powerful clock-for-clock compared to Pascal.
Kevin G - Sunday, May 7, 2017 - link
nVidia has been flirting with Samsung of late. I doubt they'd go exclusively to Samsung, but they'll likely continue to have a small/medium chip there as a testing vehicle in case they need to quickly switch their entire lineup over.
Azethoth - Friday, May 5, 2017 - link
This is exciting news. The existence of GigaFabs means we must be getting close to the first MegaFab!
ishould - Friday, May 5, 2017 - link
You mean TeraFab?
LuckyWhale - Tuesday, May 9, 2017 - link
I wish Anton would work more on his writing or get a better editor. He writes English so mechanically, and it is painfully obvious English is nowhere near his first tongue. I used to follow him at Xbitlabs. Great content but poor writing! Sorry.
ABR - Thursday, May 11, 2017 - link
Hmm, normally I'm a stickler about this stuff, but with Anton's articles I guess I'm usually so immersed in the content that I don't notice anything!
darkich - Wednesday, May 10, 2017 - link
Soo... no 7nm on Intel's roadmap?
Seems like Samsung, GF and TSMC are on their way to leave it in the dust.
peevee - Friday, May 12, 2017 - link
The "nodes" are just pure lies at this point. 45nm doubled gate density of 65nm, as expected. 10nm chips should have 20 times more transistors per area unit compared to 45nm. They are not even close. And there is nothing which is really 10nm, even a feature as simple as metal pitch is 40-50nm.RubiNBA - Monday, May 15, 2017 - link