Comments Locked

44 Comments

  • Wade_Jensen - Wednesday, October 30, 2013 - link

    This. Could. Be. Awesome. In. Mobile.

    Please give 22nm to Qualcomm, Intel. Maybe swapsies for some integrated modem tech?
  • errorr - Wednesday, October 30, 2013 - link

    That would be a huge stretch. These chips in no way compete with any Intel offering. They are limited-use FPGA monsters.

    According to Altera's press releases, these chips will be THE FIRST 14nm chips released for sale. The margins are huge, and Intel can monetize even the lowest-yield wafers before a node is mature.

    This is all about TSMC losing their most crucial customers. The question I have is how much Apple is willing to pay for a process advantage, given their willingness to pay extra for silicon.
  • Kevin G - Wednesday, October 30, 2013 - link

    There is plenty of demand for TSMC to make up for the loss of these relatively low-volume chips. It does pose optimization problems, as FPGAs were often the first designs fabbed to help a process mature. Another customer with more complex designs will just have to lead TSMC's new process roll-outs.

    Actually, this is more of a threat to IBM's foundry business than TSMC's. If Intel's US-based fabs obtained Trusted Foundry status, the FPGA customers IBM serves would likely migrate to Intel based on the process advantage. That is where the really lucrative customers are, as they're required by the US Department of Defense to use a trusted foundry and maintain a chain of custody during manufacturing to ensure the integrity of the chips. Trusted Foundry status doesn't add anything technical to the chips, just this oversight and verification, which comes at a very, very nice premium.
  • michael2k - Wednesday, October 30, 2013 - link

    Apple wouldn't even need to move to 14nm to see a process advantage. 22nm is sufficient given they are currently at 28nm.

    In other words, if the A8 in late 2014 is manufactured on Intel's 22nm LP process, they would still see a tremendous advantage over the 2013 A7, enough to make them more than competitive with anything Qualcomm could ship.
  • Kevin G - Thursday, October 31, 2013 - link

    How big of an advantage would that realistically be, considering that TSMC will be shipping 20 nm devices by the end of 2014? Sure, Intel's process has been in production longer and is thus very mature, but the gain has to be smaller than, say, moving from 22 nm to 14 nm at Intel. Thus there is motivation to continually be on Intel's state-of-the-art node for that process advantage to really pay off.

    Moving from Samsung's 28 nm to Intel's 14 nm process would enable roughly 4 times the transistor density. That'd allow the SoC to move to quad core, more GPU resources, and a wider memory bus while using a smaller die area and less power. The only downside for Apple in this scenario would be Intel's premium pricing.
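
    As a rough sanity check on that "roughly 4 times" figure, here is a minimal back-of-the-envelope sketch. It assumes ideal area scaling by the nominal node names, which real processes never fully achieve, so treat the result as an upper bound.

        # Ideal density scaling between process nodes: (old / new)^2.
        # Node names are marketing labels, so this is an upper bound,
        # not a measured figure.
        def ideal_density_gain(old_nm: float, new_nm: float) -> float:
            return (old_nm / new_nm) ** 2

        print(ideal_density_gain(28, 14))  # 4.0 -> the "roughly 4x" above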
  • michael2k - Friday, November 1, 2013 - link

    Apple at 28nm is competitive with Bay Trail at 22nm. Project a year forward and Apple at 22nm will be competitive with Intel at 14nm or Qualcomm at 20nm.
  • Krysto - Wednesday, October 30, 2013 - link

    Meh. ARM chip makers are already moving to 20nm in 2014, and 14/16nm FinFET in 2015.

    Intel doesn't need to "give them" anything.
  • mikk - Wednesday, October 30, 2013 - link

    14nm/16nm from GF and TSMC is just a renaming: same process but with FinFETs... Intel's lead will be bigger than before.
  • michael2k - Friday, November 1, 2013 - link

    What lead is that with Apple@28nm being competitive with Intel@22nm?
  • Krysto - Friday, November 8, 2013 - link

    How so? ARM hasn't had FinFET so far, but soon they will, and they won't have to wait a whole cycle (two years) after 20nm to get it. That means the gap between Intel and the others will be cut in half. So they'll be closer, not farther apart. Also, Intel is already seeing delays with 14nm Broadwell.

    The difference will be one year at most, and that's compared to Intel's highest-end chips. Atom still gets each node a year later, so it's possible ARM chips will even be ahead of Atom in process technology. Also, Atom is already barely competitive with last year's dual-core A15 and Mali T604 GPU.
  • Hector2 - Friday, November 8, 2013 - link

    ARM moving to 20nm in '14 and 14nm in '16? Yeah, right. It's a lot easier to do it on paper than to actually make high-yielding wafers. Intel built its first 14nm silicon in the lab a couple of years ago.
  • Homeles - Wednesday, October 30, 2013 - link

    The idea is honestly ludicrous. Intel will be building their modems on their process in the "near" future -- why would they give up that advantage?
  • Jaybus - Thursday, October 31, 2013 - link

    Intel will almost certainly be building an ASIC modem chip, not an FPGA. There are some advantages to an FPGA in terms of re-use, but FPGAs invariably draw more power than a dedicated ASIC. So this FPGA chip will be nice in that it can be used for many different designs, but Intel's will be an ASIC dedicated to one thing, like LTE, and so lower power.
  • Wade_Jensen - Wednesday, October 30, 2013 - link

    I know it'll never happen, guys, just having some fun.
  • tviceman - Wednesday, October 30, 2013 - link

    It's extremely wishful thinking to mention Nvidia or AMD getting a piece of Intel's fab action. All three companies are locked in a three-way competition, so unless the revenue from hosting competitors offsets the potential losses (which I doubt it would), it will never happen; Intel would only be creating even bigger obstacles for itself in the HPC and mobile markets.
  • JarredWalton - Wednesday, October 30, 2013 - link

    Unlikely, sure -- I've updated the text while you were posting your comment. Anyway, the most likely route this would take is that Intel would charge a pretty healthy premium over TSMC and GlobalFoundries for their latest tech, so they would simply pass the risk over to the fabless companies. But if they can get enough money, why not?
  • Kevin G - Wednesday, October 30, 2013 - link

    It boils down to long-term cost-benefit. If nVidia, Apple, Qualcomm, etc. were to start using Intel's fabs, it would be a PR nightmare for Intel's x86 designs: effectively, Intel would appear to be throwing in the towel and conceding its process lead as a sign of defeat to ARM in the mobile space. That perception would stick even if it were a good business move thanks to the premiums Intel could charge those customers. There is also danger in letting competitors get a foothold with mobile designs, as ARM server chips are on the horizon; those would really eat into Intel's margins, with low-end Xeon sales at risk. Antitrust regulators would also be watching to see whether opening up to third parties for mobile also translates into opening the fabs to ARM-based server chips. Right now it is best to avoid these entanglements until it becomes absolutely clear that the market is going to ARM.
  • tviceman - Wednesday, October 30, 2013 - link

    That would be the only scenario in which it would happen - if Intel can make more money off running their fabs for competitors (AMD and Nvidia) than they forecast making in the next few years in mobile and HPC.
  • beginner99 - Wednesday, October 30, 2013 - link

    It's much more likely that NV or AMD GPUs would be made at Intel than, say, any ARM SoC. Why? Because an ARM SoC would compete directly with Intel's own Atom-based SoCs, so that would only happen on a lagging node, if at all. Intel, however, doesn't care much about discrete GPUs, so that seems more reasonable, but again only on a lagging node, e.g. 22 nm and never 14 nm. Since Intel's 22nm is probably better than TSMC's 20 nm in terms of leakage and performance, and TSMC's 20 nm capacity will be bought up by Qualcomm and Apple, it does seem possible, but still pretty unlikely.
  • hodakaracer96 - Wednesday, October 30, 2013 - link

    But their GPUs are in direct competition with Iris graphics. Mostly on the mobile side, but still: if Intel gives discrete GPUs another bump, they are just putting their own integrated graphics that much further behind.
  • errorr - Wednesday, October 30, 2013 - link

    According to the press release it will be on 14nm. It will be a quad-core A57 chip integrated with an FPGA.

    I see this as Intel stealing the TSMC early adopters that drive new process node adoption. 14nm Altera FPGA chips are already sampling. In the past, TSMC relied on the FPGA companies to subsidize the earliest runs of silicon until yield was good enough for the GPU makers to step in.

    This is about protecting their process lead by stealing the highest-margin customers from other foundry companies. It will make it that much harder for TSMC to make new nodes profitable.
  • Khato - Wednesday, October 30, 2013 - link

    Close. It's a quad core A53 chip - http://newsroom.altera.com/press-releases/nr-alter...
  • JarredWalton - Wednesday, October 30, 2013 - link

    Text has been updated -- I found the EETimes link after the initial post. Long-term, there are still many possible routes this can take. I doubt NVIDIA/AMD/etc. will be fabbed at Intel any time soon, but the ARM competition could make for some interesting alliances.
  • Khato - Wednesday, October 30, 2013 - link

    Agreed. Taking the FPGA customers is a great first step, as it's both high margin and the 'simplest' type of design to bring up. Of course this goes a step beyond that with the inclusion of dedicated logic, which implies that the work Intel's foundry side has been doing over the last few years has reached the point where it's capable of supporting complex outside designs... which opens the door to many more customers.

    It'll be quite interesting to see where it goes from here. My impression is that, oddly enough, we're approaching a point where it makes sense for Intel to leverage its manufacturing lead by opening it up to the competition. They might not make quite as much per wafer as they otherwise would, but every wafer they sell would be one that TSMC/GlobalFoundries/Samsung doesn't. And given that the foundry business is all about economies of scale, taking away market share can have a pretty dramatic impact on the long-term viability of the competition. Which is to say that it would be far easier for Intel to obtain an effective monopoly on the foundry business than on the SoC business, no?
  • Dentons - Wednesday, October 30, 2013 - link

    Either you're right, and Intel is cherry-picking the most profitable ARM customers in order to hurt the ARM fab ecosystem, or Intel has realized it cannot beat ARM in mobile and faces heavy losses if it doesn't jump into the ARM fab business.

    Your explanation seems most likely, but we cannot discount the possibility that Intel is facing some truly dismal long-term forecasts.

    Consumer use of desktop and laptop computers is diminishing at an unprecedented rate. Tablets and phones are stealing a massive chunk of Intel's business. Upgrade cycles are being missed, many customers may never return.

    Intel has only in the past few months managed to produce SoCs able to match ARM's mobile products. Some would say Intel hit parity many years too late. Even if Intel were to have vastly superior mobile chips, they would still have a terribly hard time competing with the ARM offerings.

    Mobile device manufacturers truly love the abundance of competition available in the ARM market. Even Samsung regularly buys externally developed SoCs for its top-line devices.

    What mobile device manufacturer desires a return to a world dominated by a single processor manufacturer? Especially when that single maker is a company with Intel's monopolistic history.

    Intel is facing a severe drop in market share. While this move may initially be focused on hurting TSMC, we cannot discount the possibility that Intel's forecasts suggest x86 sales are set to plummet further, and that selling fab capacity to ARM vendors may be the only way to maintain revenues.
  • Krysto - Wednesday, October 30, 2013 - link

    The only reason Intel is even "beginning" to do this with non-competitors is that they are starting to become desperate about the coming decline of x86 chips. If they become desperate enough and are really cornered they MIGHT start to make competing chips, too, but I wouldn't expect it anytime soon - maybe in 2-3 years (IF it happens).
  • Homeles - Wednesday, October 30, 2013 - link

    Selling off excess fab capacity is not an idea borne out of desperation -- it's common sense.
  • Dentons - Wednesday, October 30, 2013 - link

    If that's all they're doing, and if the chips they're building would be built elsewhere anyway, and if they're not removing capacity constraints, then no, they wouldn't be hurting themselves.

    That's a lot of ifs. Reality isn't always so black and white. If the traditional ARM fabs run into trouble, Intel could end up saving the bacon of some large ARM deployments. It could also free up capacity at the other fabs, allowing even more, cheaper ARM chips to reach the market, stealing even more business from x86.

    Plans don't always work as designed; the competition gets a vote. Even this limited move could steal business from Intel's far higher-margin x86 parts.
  • extide - Wednesday, October 30, 2013 - link

    If Intel really thought x86 was going to go away and ARM was going to be 'in', then they would ditch Atom and come out with an ARM chip of their own.
  • Krysto - Thursday, October 31, 2013 - link

    Let me put it this way - Nokia didn't think Symbian would go away either, for four years after the iPhone came out. Otherwise they would've gone with Android earlier...
  • Krysto - Wednesday, October 30, 2013 - link

    Hold your horses. Intel is only allowing this for a company that isn't really their competitor.

    They wouldn't give this to Nvidia or Qualcomm, who are direct competitors.
  • JarredWalton - Wednesday, October 30, 2013 - link

    Of course they're not "giving" anything to anyone. I'm sure Altera is paying Intel a nice price for the chips produced there, enough so that Intel is willing to talk. If NVIDIA were willing to pay enough, Intel would likely talk to them as well. Of course, the costs for NVIDIA to do something at Intel are likely high enough that Intel simply buying NVIDIA would be more likely. ;-)
  • Dentons - Wednesday, October 30, 2013 - link

    An Intel purchase of Nvidia could have a tough time meeting regulatory approval. They may have to divest Nvidia's ARM division, and then, what's the point?
  • JarredWalton - Wednesday, October 30, 2013 - link

    The winky-face was supposed to let people know that I'm not at all serious about Intel buying NVIDIA. Ten years ago, it could have happened maybe, but not today. NVIDIA of course seems more interested in becoming more like Intel and building their own CPU designs, so we may see some interesting stuff down the road from the green team.
  • easp - Wednesday, October 30, 2013 - link

    A few years ago, when I first started asking whether Intel could, in the long run, compete with the Merchant Fab + Fabless Semiconductor + IP Developer ecosystem, I never really considered that Intel would become a merchant fab.
  • sherlockwing - Wednesday, October 30, 2013 - link

    A53 is the key word in that announcement. Anand & Brian have said a few times that Intel currently doesn't have a Silvermont design that can compete with A7/A53-class chips on price, so that's a market Intel can't get into without making ARM chips.
  • iwod - Wednesday, October 30, 2013 - link

    My exact feeling on today's PC performance: a Core 2 Duo combined with a PCIe-based SSD and 8GB of memory. While the geeks may not agree, more than 90% of people won't need anything more than that. And it has been this way for longer than I can remember.

    That is why the emergence of tablets and ARM is seriously threatening Intel. Apple's A7 at 28nm now, a quad-core Ax at 20nm in 2014, and double that in 2015.

    So a decade after Apple switched the Mac over to Intel, Apple will have created a chip capable of replacing it. Sometimes when you look back you are simply amazed at how much technology has leapt and evolved.
  • code65536 - Thursday, October 31, 2013 - link

    Not really. Only on the low end are dGPUs rubbing up against Iris. iGPUs will not match high-end dGPUs for the foreseeable future, not when high-end dGPUs are currently much more complex than the CPU itself.

    And it would synergize with Intel's high-end CPU products.
  • Krysto - Thursday, October 31, 2013 - link

    FYI, the 20nm process is already good to go, and we'll probably see 20nm ARM chips in smartphones early next year, BEFORE 22nm Merrifield arrives in phones.

    Also, 16nm FinFET seems to be on track for early 2015 as expected, and even this Altera chip won't be made at 14nm until at least late 2014. So Intel really doesn't have any real process advantage anymore, especially in mobile. By 2015, the ARM fabs will have pretty much caught up with them.

    http://semiaccurate.com/2013/10/30/tsmc-shows-prod...

    Seeing how the 22nm tablet version of Atom is barely competitive with LAST year's 28nm ARM CPUs and GPUs, I can't wait to see how far ahead 14nm/16nm FinFET ARM chips will be of Intel's 14nm Atom in 2015 - probably close to two generations ahead in performance, if Intel is already a generation behind now despite being a node generation ahead of ARM.
  • azazel1024 - Thursday, October 31, 2013 - link

    I have to agree on the performance bit.

    I am maybe a bit more demanding than the average user, but my Core 2 Duo (E7500) was fast enough for all the basics I wanted to do. It fell behind in video transcoding, but that was one of the few "really demanding" tasks I threw at the thing where I felt it was short. Oh, and I hadn't played it at the time, but KSP probably wouldn't have been nearly as fun on it. 18 months ago, I upgraded to an i5-3570 overclocked to 4.0/4.2GHz. I can't imagine upgrading the thing for a number of years now. It tears through pretty much anything I throw at it with aplomb.

    My laptop, an HP Envy 4t with an i5-3317U, is the first "fast enough" laptop I've ever owned. About the only thing in it that makes me want to upgrade is the graphics (just HD 4000)... okay, and the screen, but that isn't a "processing power" issue. Depending on what Broadwell/Skylake deliver, whenever I upgrade the laptop, it just might be "fast enough" for a lot of years of use. Even now, if it wasn't for some of the games I play on the laptop, I'd probably be happy enough with how fast it is on the whole for years and years.

    I am looking at getting an Asus T100 for a tablet and occasional laptop use. The Z3740 sounds like it'll probably be fast enough for everything I'd want a tablet to do and most things I'd want effectively a netbook to do. That sucker I can DEFINITELY see wanting to upgrade after another generation or two of Atom processors, though, for faster graphics, a faster CPU, and more RAM.

    After a couple of generations, dunno. It might have hit "more than fast enough".

    CPU churn is getting a lot lower, both because processors tend not to get that significantly better between generations and because, IMHO, computing tasks aren't getting significantly harder these days. Go back a bit in time and I would have been upgrading a laptop every 12-24 months because the newest thing was really just that much better, enough to be worth the upgrade. The desktop has always stagnated a bit for me, but before the Core 2 Duo I was also upgrading every 12-24 months. Then it was almost 4 years between desktop upgrades on my latest cycle, and it just might be again (I am eyeing Skylake with SATA Express, DDR4, and PCI-e 4.0 support, plus hopefully some real, measurable gains in CPU performance between Ivy and Sky, not a few single-digit percentage points here and there). It's been basically 12 months on the laptop, but I really don't have an itch to upgrade it (other than the screen; it isn't the worst TN panel I've ever seen, but I really need an IPS in my next laptop). Maybe in another year or two.
  • azazel1024 - Thursday, October 31, 2013 - link

    Krysto, not sure where you are getting that. Bay Trail looks like it blows away basically all ARM CPUs right now.

    The only one that seems to be beating it out is the brand-new A7 chip, at least compared to the T100 and its Z3740... which is not the fastest Intel Atom, and those are just browser benchmarks. The Z3770 looks like it would likely beat out the A7 in just about everything, at least by a slight margin, with its base and turbo clock speed advantage over the Z3740.

    Most other ARM chips, even pretty new ones (Tegra 4 isn't that old), seem to get spanked, with the Z3740 holding a 20-60% advantage over them.

    It might also well prove that the Z3740/Z3770 is using less power than those ARM chips (hard to tell, since we can really only see Wh per hour of run time for overall package power, but the T100 looks very competitive against the ARM crowd even without knowing how much power the display and other bits are using).

    Unless Intel slips, it looks like they are dropping 14nm with Broadwell sometime in Q1 2014, possibly before 20nm is available elsewhere, and that 20nm will be planar. The little I've been able to dig up says Intel is likely to ship Airmont/Cherry Trail sometime in Q2 or Q3 next year, following shortly on its heels with Goldmont/Willow Trail around Q4 2014 or Q1 2015, with, I assume, the 10nm shrink sometime late in 2015.

    Merrifield is definitely "running late" in terms of phone introduction, but Intel looks set to stay easily at least a year ahead of its ARM competitors on process size, and it will still have a technology advantage (FinFET versus planar) once 20nm drops for ARM producers. Also, last I heard, TSMC's 16nm FinFET is going to be a hybrid at first, essentially FinFET transistors on top of 20nm-class metal layers.

    32 to 22nm was a full node for Intel. 28 to 20nm is a full node for TSMC and others. 22nm to 14nm is a full node for Intel. Unless I'm missing something, 20nm to 16nm is only a half node for TSMC and others... which means that when TSMC gets there, they won't have had as large a shrink as Intel will have, and Intel will be racing towards 10nm (on Atom) not long after TSMC and the others have just gotten to 16nm (Intel might even reach 10nm before they reach 16nm, depending on Intel's release capability/plans for Atom in 2015).
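
    For what it's worth, here is a minimal sketch of those node-to-node steps, assuming ideal area scaling by the nominal node names; marketing names and real pitches have diverged, so treat the numbers as rough upper bounds rather than measured density gains.

        # Ideal area-scaling factor between nodes: (old / new)^2.
        # Node names are marketing labels, so these are rough upper bounds.
        steps = [
            ("Intel 32 -> 22 nm", 32, 22),
            ("TSMC  28 -> 20 nm", 28, 20),
            ("Intel 22 -> 14 nm", 22, 14),
            ("TSMC  20 -> 16 nm", 20, 16),
        ]
        for label, old, new in steps:
            print(f"{label}: ~{(old / new) ** 2:.2f}x")
        # Intel 32 -> 22 nm: ~2.12x
        # TSMC  28 -> 20 nm: ~1.96x
        # Intel 22 -> 14 nm: ~2.47x
        # TSMC  20 -> 16 nm: ~1.56x  (closer to a half-node step)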
  • djscrew - Saturday, November 2, 2013 - link

    10 nm in 2015? Unless you're talking early engineering samples, $5 says you're on crack.
  • djscrew - Saturday, November 2, 2013 - link

    I would be shocked if any of these sub-20nm process nodes didn't get pushed back at least 6 months, and more likely a year, especially those that aren't Intel's or Sammy's. Never mind the issue of yield.
  • abufrejoval - Tuesday, November 5, 2013 - link

    I can believe that Intel needs a broader revenue stream to maintain its fab advantage: the process advantage is constantly eroding, and while the investments required to maintain it also seem to follow Moore's law, the revenues obtainable through that advantage are rapidly declining, with the market outside servers saturating and server CPUs evolving too slowly, or gaining too little from the shrinks.

    Most of the transistor real estate made available by process shrinks seems to go into caches. If I look at an x86 die photo these days, the only things that stand out are huge areas of totally regular structure, implying cache. Easily 80 percent of the die area is cache, while the majority of the rest may go to register renaming (also a cache of sorts) and floating point (totally useless on most big-data queries).

    From what I understand about DDR4, vendors are, first of all, far more reluctant to move there than to look for alternative places to spread their risk.

    Next is that you really need buffer chips, not just one per DIMM but one per die pack.
    That immediately has me wondering why those couldn't move onto the dies themselves, perhaps with only the first die in the pack acting as a gateway (I don't think these register dies are really large or expensive to add to a memory die, even if only one out of eight or so will actually be active).

    Then, hearing about all these smart features like on-the-fly activation of spare rows, I think back to the graphics VRAM days 30 years ago, when VRAM included BitBlt helper functions like fast clear or color expansion to enable 1080 graphics on 8MHz (effective) GPUs like the TMS34020.

    The idea was to take advantage of being able to manipulate RAM not in bits but in entire rows, by adding a few command pins.

    Many compute workloads today involve ultra-fast searching for patterns, and engineers throw megatons of silicon and watts at moving huge quantities of bits on super-wide but long highways to solve the problem at a distant CPU cluster, while using one bit out of millions as far as the RAM is concerned: it's an enormous waste of silicon real estate and power-hungry CMOS state transitions.
    Clearly, partitioning the load and moving it closer to the RAM seems the smarter approach, so why not go all the way and move the CPU power towards the memory (or the memory into the CPU), effectively producing something capable of acting (among other things) like a map-reduce chip.
    Coming back to VRAM with built-in BitBlt and FPGAs at 14nm, it would seem to me that a hybrid DRAM/FPGA with a few ARM cores sprinkled in to do the FPGA reprogramming and some housekeeping could be just the ticket to producing application-specific smart RAM on the fly, capable of doing simplified operations on entire rows of RAM, which either ARM or x86 CPUs could then gather for meta-processing over byte+tag-sized DRAM result/content ports.
    In that scenario, having to deal with JEDEC and lots of DRAM manufacturers would not only be a huge burden, but you also wouldn't want to give valuable IP away.
    So I wonder why Intel doesn't use all that excess fab capacity to move into the production of smart RAM, accelerating light years beyond the server competition.
    Thankfully this idea is a) totally crazy and undoable and b) nobody at Intel will read it anyway, which is why I feel so free to post it here ;-)
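
    To make the "smart RAM" idea above a little more concrete, here is a toy, purely illustrative sketch of the concept: a predicate is pushed down to the memory, which scans its own rows and hands back only matching row indices, so the host gathers a handful of rows instead of streaming every byte across the bus. The SmartRAM class and its method names are invented for illustration and don't correspond to any real device or API.

        # Toy model only: an object standing in for "smart RAM" that can
        # filter its own rows, versus a host CPU that pulls everything.
        from typing import Callable, List

        class SmartRAM:
            def __init__(self, rows: List[bytes]):
                self.rows = rows  # each entry stands in for one DRAM row

            def filter_rows(self, predicate: Callable[[bytes], bool]) -> List[int]:
                # "In-memory" search: only matching row indices cross back to the host.
                return [i for i, row in enumerate(self.rows) if predicate(row)]

        # Host-side usage: the CPU only gathers the (few) matching rows.
        ram = SmartRAM([b"alpha", b"needle", b"beta", b"needle"])
        hits = ram.filter_rows(lambda row: row == b"needle")
        matches = [ram.rows[i] for i in hits]   # meta-processing on a few rows
        print(hits, matches)                    # [1, 3] [b'needle', b'needle']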
