53 Comments

  • jrs77 - Thursday, May 12, 2011 - link

    ...but keep it x86, pretty please.

    I'm a fan of the current Atoms when paired with ION graphics. I use them for my HTPC, my netbook, and I even use one as a NAS that lets me run an MMO client (EvE Online) for dual-accounting.

    If they can increase performance to the level of, let's say, a 2GHz Core2Duo + HD5450 while keeping the power consumption at its current level, then this'd be awesome.
  • DanNeely - Thursday, May 12, 2011 - link

    That seems possible. The closest comparison I could find was a 2.2GHz C2D, and it led the D510 Atom by a factor of 2-3 depending on the benchmark. Dropping from 45 to 22nm will give 4x the die space to play with, so going quad core even on the same architecture will get you close even before process enhancements.

    http://www.anandtech.com/bench/Product/110?vs=65&a...

    I think the GPU gap's a lot bigger though. OTOH, if the GPU is about 3/8ths of the die area of a dual core Atom (half of a single-core die, with the CPU core being one third and the remainder being memory/IO controllers), then even if the size of the memory/IO controllers triples, keeping the die size the same would let Intel increase the GPU to 7x its current size (4 + 3 + 1 = 8 area units now, vs 8 + 21 + 3 = 32 at 22nm), which might be enough extra power to pull it off.
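
    A quick back-of-the-envelope version of that area math (the 4/3/1 CPU/GPU/IO split is just my rough guess, not a measured figure):

    ```python
    # Hypothetical die-area budget in arbitrary "units"; the 4/3/1 split for
    # CPU/GPU/memory+IO on the current 45nm dual-core die is an assumption.
    cpu_45, gpu_45, io_45 = 4, 3, 1
    die_45 = cpu_45 + gpu_45 + io_45      # 8 units total

    die_22 = die_45 * 4                   # same die size, ~4x density at 22nm -> 32 units
    cpu_22 = cpu_45 * 2                   # go quad core -> 8 units
    io_22 = io_45 * 3                     # assume memory/IO triples -> 3 units
    gpu_22 = die_22 - cpu_22 - io_22      # leftover area for the GPU -> 21 units

    print(gpu_22 / gpu_45)                # 7.0, i.e. room for a GPU ~7x bigger
    ```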
  • duploxxx - Friday, May 13, 2011 - link

    Why wait for Atom? The next gen of Brazos, due early Q1 2012, will have that level of performance, perhaps a bit lower on CPU but right on for GPU :). The Atom 32nm shrink will be dull.
  • Sihastru - Saturday, May 14, 2011 - link

    Why, you say?

    Well, because the E-350 and all future APUs from AMD are driverless on Server 2008 R1/R2, and they also have problems on Linux systems (like Ubuntu).

    AMD, in their moronic wisdom, decided to actively block their drivers from installing on "Server" OSes. And not only for APUs, but for any Radeon GPU.

    On the other hand, Intel and nVidia, even though they don't hide the fact that they do not support said operating systems, will still allow you to install their Vista/7 drivers.

    Not supporting an operating system is one thing, but actively blocking the driver installation is just stupid.

    That's why...
  • Griswold - Sunday, May 15, 2011 - link

    You and who else cares about using these platforms with a server OS?

    A non-issue.
  • Sihastru - Monday, May 16, 2011 - link

    Just me and the 1000s of students that got that OS from MSDN for free. A low power system seems like a good idea to me for a 24/7 box.

    Here is another non-issue: Flash Player 10.x lacks support for AMD's UVD3 implementation. All reviews on the interweb were done using an AMD-supplied Flash Player 10.2 (i.e. hacked) that normal people like us don't have access to. Doesn't sound like a big deal, but what it means is that you can't play back any 720p/1080p Flash video without huge stuttering.

    Some other non-issues: http://www.anandtech.com/show/4134/the-brazos-revi...

    I could go on, but I'm bored.
  • zephxiii - Thursday, May 19, 2011 - link

    I've actually found old ThinkPads (T60-T61, C2D or Core Duo powered) to be great for playing around with OSes like that. Low power, with a built-in battery, keyboard, and monitor all in one. ...and the C2D has 64-bit + virtualization.

    Right now I'm using a ThinkPad T43 Centrino 1.8GHz as a pfSense firewall/router, and it uses 16 watts with the screen on the lowest setting... not sure what it's using with its screen off (its normal state), but I can tell you the older ThinkPad R31 PIII 1.1GHz I bought for 110 bucks to replace it uses 14 watts with the screen off.
  • strikeback03 - Wednesday, May 25, 2011 - link

    My T43 with the 1.86GHz processor used 10-11W with the screen closed. Hopefully I can revive it; it just completely died a few months ago, and with moving and setting stuff up I haven't had time to check the fuses or bake the motherboard.
  • Prosthetic Head - Tuesday, May 17, 2011 - link

    I'm currently running an E-350 as a Linux-powered HTPC - no problems.
  • Link20 - Thursday, May 12, 2011 - link

    Yeah, but it will come out against what ARM has to offer at the time.
    How can you be a fan of Atoms? They suck. They are just cheap. Only E-350 netbooks are kinda adequate...
  • jrs77 - Thursday, May 12, 2011 - link

    ARM is not the x86 architecture and therefore can't run the usual software.

    And I'm only a fan of Atoms if they're bundled with ION graphics. The E-350 wasn't around when I bought my three Atom/ION-based systems. Additionally, Linux (Ubuntu) has problems with AMD-based systems, as AMD has crap driver support for Linux all around.

    So for my needs, there's nothing better around than Atom/ION-based systems. They run all the software they need to run and they draw a maximum of 35 watts from the plug under load.

    Oh... and you won't find E-350 mini-ITX boards with an onboard PSU either ;)
  • poopypants_johnson - Thursday, May 12, 2011 - link

    AMD doesn't have problems with Linux drivers; in fact, over the past few years they've established themselves as the premier Linux-supporting hardware company. Brazos isn't quite working properly yet with the FOSS Radeon drivers, but it works perfectly with the Catalyst blob. Nvidia's blob has been on the decline for quite some time, and Nvidia does not support the FOSS Nouveau driver like AMD does. For that matter, Sandy Bridge doesn't work right yet either, and there is no workaround for that.
  • Rick83 - Friday, May 13, 2011 - link

    If you need video acceleration on Linux, forget AMD even exists. It's nVidia (VDPAU) or tears of frustration; that's how bad XvBA/UVD2 support is.
  • Prosthetic Head - Tuesday, May 17, 2011 - link

    Agreed, the "AMD is rubbish on Linux" myth is starting to really annoy me. In fact, in my experience the performance of AMD chips relative to Intel chips is better on Linux than on Windows (my guess would be that most Windows benchmarks are compiled with better optimization for Intel chips).

    Nvidia GPUs have given me more grief recently than ATI/AMD. Nvidia used to be well ahead in terms of Linux support, but that just isn't the case any more. AMD are also being more co-operative in getting usable open drivers for their hardware out, whereas Nvidia seem to be actively fighting such efforts.

    The only area where AMD/ATI are behind on Linux is video decode acceleration support. It's apparently being worked on, but at the moment it's not up to much.
  • Griswold - Sunday, May 15, 2011 - link

    You contradict yourself. If Linux is your OS of choice, then you can run any software you want on an ARM platform.
  • ProDigit - Friday, May 13, 2011 - link

    You can run Windows natively, without using as much battery as AMD's offerings.

    Atoms are currently the only CPUs that can run Windows on a machine with long battery life.
    Usually ARM gets something like ~13 hours but doesn't run Windows, Atom ~8 hours, AMD E-series ~5 hours.

    It all depends on the battery size of course, but I do think the 32nm Atoms are definitely something worth getting.

    I do, however, not see any benefit in running an ION platform, as AMD outperforms it easily.
  • e36Jeff - Friday, May 13, 2011 - link

    According to the last Brazos test, the E-350 has better relative battery life than a D550 + 3150 (7.67 min/Whr vs 6.69 min/Whr), and it totally destroys any of the Atom + ION configurations (the best one was 5.48 min/Whr). In fact, the only setup that beat the E-350 in relative battery life was the N450 + 3150 (9.48 min/Whr), which beat the crap out of everything in the test. But I think most of us can agree that an E-350 with 7-8 hrs of life is more useful than an N450 with 8-10 hrs of life.

    data from this test: http://www.anandtech.com/show/4218/amds-brazo-e350...

    And just for the record, two of the E-350s only dropped below 5 hours of battery life during the h.264 test, and even the N450 dropped down to 5.5 hrs in that test. In the other 2 tests, they were above 7 hours. The main issue with the E-350 not getting great battery life is manufacturers not putting large batteries in them.
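
    Rough math on that, using the min/Whr numbers from the test above; the battery pack sizes here are just illustrative guesses, not the actual review configurations:

    ```python
    # Battery life scales linearly with pack capacity at a fixed efficiency (min/Whr).
    e350_eff = 7.67   # min/Whr, E-350 (from the Brazos test linked above)
    n450_eff = 9.48   # min/Whr, N450 + 3150

    for whr in (35, 48, 63):                  # hypothetical 4/6/9-cell-ish packs
        e350_hrs = e350_eff * whr / 60
        n450_hrs = n450_eff * whr / 60
        print(f"{whr} Whr pack: E-350 ~{e350_hrs:.1f} hrs, N450 ~{n450_hrs:.1f} hrs")
    # A 63 Whr pack already puts the E-350 around 8 hours; battery size, not the chip, is the limiter.
    ```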
  • erple2 - Friday, May 13, 2011 - link

    That's not quite right - the DM1Z reviewed here (http://www.anandtech.com/show/4187/hp-dm1z-taking-...) shows 8 hours, 20 min at idle, and about 5 hours 15 minutes "doing stuff" (in which, incidentally, the ASUS 1001P gets about 30 extra minutes of battery life).
  • ClagMaster - Thursday, May 12, 2011 - link

    Perhaps the FSB was retained all these years because it's a technology that works and easily supports the bandwidth of the Atom processors until the transition to 22nm transistors.

    The objective of Atom was high performance-to-power ratios, not raw performance, nor embracing the latest and greatest bus technology that the Nehalem processors have because of their vastly superior performance.

    Evidently bandwidth requirements have come to the point where the FSB needs to be replaced.
  • qwertymac93 - Thursday, May 12, 2011 - link

    Atom having SB-level graphics is insane? Just the opposite: Brazos already has better graphics than SB, and that's on 40nm fab tech. If Intel can't match SB performance on a process that, in your own words, might be as small as 1/4th the size, they've got some big problems.
  • starfalcon - Thursday, May 12, 2011 - link

    What?
    http://www.anandtech.com/show/4262/asus-k53e-testi...
    Looks like in these eight games Brazos can't match SB for graphics performance.
  • mianmian - Thursday, May 12, 2011 - link

    Keep in mind that the E-350 has MUCH weaker CPU performance. The lower FPS would mostly be due to the weak core.
  • starfalcon - Thursday, May 12, 2011 - link

    Yeah, it's also only at 8.5GB/s of graphics bandwidth while SB systems are at 21.3GB/s. It seems to be pretty far behind in 3DMark too, though isn't that at least somewhat CPU limited as well?
    Obviously it's not made for serious gaming, but it still looks like it's faster than ION.
  • poopypants_johnson - Thursday, May 12, 2011 - link

    Yeah, but that's comparing a 35W, 2.5GHz CPU to an 18W, 1.6GHz CPU. Graphics can't do it all by themselves, especially when the CPU has a low clockspeed. If you compare the GPUs:

    Intel's 12 EUs equal about 48 AMD SPs, running at ~1000MHz, vs AMD's 80 SPs running at 400MHz.

    So basically, the horsepower should be comparable, but Intel's is far less efficient and scalable, and takes up far more die space. The reason Intel's next gen graphics only has 16 EUs is that their architecture sucks; it won't scale anywhere near linearly like AMD's or Nvidia's.
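
    Rough shader-throughput math behind that (treating 12 EUs as ~48 SP-equivalents is my own approximation, not an official conversion):

    ```python
    # Raw shader throughput in arbitrary "SP-MHz" units.
    intel_sp_equiv, intel_mhz = 48, 1000   # ~12 EUs treated as ~48 SP-equivalents
    amd_sp, amd_mhz = 80, 400              # Brazos: 80 SPs at 400MHz

    print(intel_sp_equiv * intel_mhz)      # 48000
    print(amd_sp * amd_mhz)                # 32000
    # Same ballpark on paper (Intel even a bit ahead), which is the point:
    # per-unit efficiency, drivers and the CPU explain the real-world gap.
    ```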
  • dagamer34 - Friday, May 13, 2011 - link

    So, to sum up what you've said: Intel is about as good at making GPUs as Microsoft is at making a good mobile operating system. They've certainly improved from the past, but there are several examples of companies just doing it better.
  • StevoLincolnite - Friday, May 13, 2011 - link

    Can't forget drivers; they are probably the biggest issue. They are poor and buggy, and they make AMD's and nVidia's drivers look like they're gold plated in comparison.

    AMD and nVidia have massive driver development teams and have spent years honing and fine tuning the drivers for better performance and compatibility, Intel still has a long way to go in that regard.

    Heck, didn't it take Intel a year or so just to add TnL and Shader Model 3 capabilities to its drivers after the X3100 IGP was released? Even then the TnL still sucked, and Intel went with a profiling system so some games used software TnL.
    These days, if it doesn't have an nVidia or AMD GPU, I am simply not interested.
  • ProDigit - Friday, May 13, 2011 - link

    Just to interject,
    we all want to play games on netbooks, but they were never intended to run games.

    If you want games, go with AMD, or wait until forever, when AMD and Intel become one and actually merge an AMD graphics unit onto an Atom die.

    But until then, Intel won't beat AMD in the gaming department, and at best is a super efficient processor in between ARM and AMD, but one that runs Windows.

    Buy Intel if you need a device to do the basics with and that needs 10+ hrs of battery life. Buy AMD if you play games. Intel plus Nvidia (the ION platform) is a dying breed.
  • duploxxx - Friday, May 13, 2011 - link

    Watch those SB scores get killed in 3-4 weeks when Llano arrives :)

    An HD 2000 level would be too weak against the competition in 2013; perhaps fine in low power systems, sure, but not at the netbook level.
  • starfalcon - Sunday, May 15, 2011 - link

    Also note how an HD 5470, which is faster than Brazos, gets beaten by SB graphics, so Brazos graphics aren't better than SB.
    It would be impressive if Ivy Bridge gets to around Llano levels too, depending on the system.
  • starfalcon - Sunday, May 15, 2011 - link

    Also, it makes me laugh when someone is happy about or says Llano is going to kill SB graphics; think about what you are saying.
  • L. - Monday, May 16, 2011 - link

    Not going to happen.
    For Ivy Bridge to reach Llano levels, it would need a real GPU.
    So either one coming from AMD or one coming from nVidia.

    And besides, the comparison between the Intel 2000/3000 and the HD 5450 has been done by Anand, and it's really clear they're tied, with the AMD poor man's board leading in the good games.

    The main thing to remember is the following:

    Brazos is CPU / GPU balanced for the modern days.
    Intel Core processors, even with the 3000 IGP, are a joke; anyone buying those CPUs couldn't care less about the on-die GPU.

    Llano is going to be balanced, just as brazos is, and will therefore not be in the same class as SB.

    Since they are not in the same market segment, neither will kill the other, but there is an extremely high chance that all-round cheap systems built with Llano will do some damage in the market segments above and below it, because of the unmatched performance/price.
  • JasonInofuentes - Thursday, May 12, 2011 - link

    And naming chips for mountains in a punny way is hilarious. But can I just say that I really love that NVIDIA is naming their chips after superhero aliases. I can't wait to see which Flash they choose. And Green Lanterns! The possibilities are endless!
  • jjj - Thursday, May 12, 2011 - link

    "By 2013 we should be seeing smartphones based on Tegra 3 and 4 (codename Wayne and Logan)"
    Kal-El is most likely Tegra 3, even if they avoid naming it that (for now). Logan, I guess, might be Project Denver since it's in 2013.
    The big question is what 2013 means for Atom: sampling, or actual phones hitting retail? By 2013-2014 the ARM bunch might as well be on 20nm, so Intel better hurry. I hope they get it out fast, but I do wonder what is up with Intel lately since they keep messing up: Larrabee, the G3 SSD, the always far from perfect GPUs, Atom and its significant faults, P67-Z68. Maybe Intel is getting a bit too cocky (so much so that we need a mainstream socket and a high end socket - had to say it, it just pisses me off).
    Anyway, competition is fun, and as Android, QNX, WebOS and maybe others (Apple has got to unify its OS sooner or later too) scale up, things will be even more fun.
  • KMJ1111 - Thursday, May 12, 2011 - link

    The problem I see for Intel in the future is that as low end performance increases substantially, at a lower cost, those chips will cut into their more powerful chips. I mean, the bulk of everyday users and emerging market users will benefit in that ARM chips and low end chips should in theory offer daily functional use and little strain on power grids. At the same time, the mid range chips from Intel will lose some of their value in these areas because performance will be good enough. I mean, wasn't part of the problem with the original Atom, in theory, that they had to limit its power so it wouldn't kill other parts of their market?

    If they have to compete with ARM's increasing performance and improving power consumption, I see it as quite a predicament for Intel. Server wise they will probably remain dominant unless other aspects of the market change, but if they want to retain a market or gain brand identity in a market, they cannot limit the power of the lower end chips. I mean, sticking with Atoms they do retain the ability to run x86 instructions, but my phone can now do many of my day to day functions, and how useful the added extra processing power is for average end users seems to be diminishing to me. Some of this probably comes from reading all about Google's presentations and their thoughts on how future computing will be run and distributed, but with Microsoft also building an ARM OS that will probably be able to strip out many of the legacy supports to offer a speedy experience, it seems like there is going to be a big change in the computing power that is needed by the end user.
  • mckirkus - Friday, May 13, 2011 - link

    Yes. This. I've heard it called the "good enough" factor. Once it's good enough for most day to day tasks, battery life becomes a major factor in mobile/tablet devices. Even if Atom is clock for clock twice as fast (due to OoOE, cache, etc.), if it eats 4x the power it's a non-starter.

    Now, we could see some interesting distributed alternatives to HTTP in the next ten years that require CPU/bandwidth to buy currency in a distributed social world (think BitTorrent share ratios), something that works like Folding@home with fancier networking, and that might swing things back in Intel's favor.

    In my mind, Intel needs to hire some math majors from CalTech to focus on killer apps for their soon to be overkill processors.
  • dagamer34 - Friday, May 13, 2011 - link

    While Intel can get away with what they call a GPU on its desktop and mobile chips, I doubt anyone's that interested in using their GPUs when you get to the scale of smartphones and such, as OEMs really want a proven solution in their hardware. When you're competing against GPUs like the Adreno 220 (or 300 at that point) from Qualcomm, or the Tegra 3, or the PowerVR 5 series and eventually 6 series, I don't think Intel can afford to be a slouch in that market. They may have the best die process, but Intel has definitely shown they have a lot of difficulty using it effectively when it comes to non-CPU components (e.g. Larrabee and the Intel GMA series).
  • Mike1111 - Friday, May 13, 2011 - link

    I'm not sure that Silvermont will really matter by then. Sure, if it's as good as ARM SoCs, or even a bit better, then Intel will get a few design wins. But I don't really see the incentive for most device manufacturers to switch unless Silvermont is at least 50% better overall than the ARM competition (and not worse in any category) for a cheaper price. And don't forget that with ARM you have the choice between at least three high-end SoCs from different manufacturers, or you can even design your own chips. And more and more stuff is getting shifted to the cloud, too.

    By the end of 2013 we'll probably have devices with the second gen of Cortex-A15 SoCs @ 22nm/20nm, with a quad-core Cortex-A15 and the second iteration of IMG's Series 6. And high-end tablets will probably have >=1080p resolutions, 2GB of RAM and 3rd/4th gen LTE chips. What I mean by that is that we'll be with tablets where we are now with PCs: current hardware is good enough for most people and most tasks. Of course software and the whole ecosystem will play a huge role: e.g. even though last year's(!) MacBook Air is vastly underpowered in terms of specs, most people using it for most tasks will find it to be a snappy machine and good enough.
  • Shadowmaster625 - Friday, May 13, 2011 - link

    Can you get a review of one of these chips? (And/or the B810.) I cannot be sure, but I think the die size on this chip is 140mm^2. So at 22nm we're talking 100mm^2. Talk about an incredibly powerful chip for 100mm^2. If they took one of those chips and cut the cache in half, they could easily make it 70mm^2. With the ultra low voltage operation improvements, as well as the reduced cache, it should be easy enough to get this under 5W. This chip would be a monster in a netbook. I'm more interested in this than in Atom.
  • L. - Monday, May 16, 2011 - link

    At 22nm, you will have Brazos-style chips from anyone in the ARM crowd plus AMD that will have the performance of a Llano at a 10 watt TDP, more or less (and yes, Llano is not out yet, but you can imagine it beats my config: C2D @ 3.89GHz + HD 4850).

    And thus... what would be the point of a shrunk Celeron, a shrink of the worst kind of CPU ever made, one that is already so low on cache you can feel it?
  • iwod - Friday, May 13, 2011 - link

    Let's just say the GPU will be the same, and most ARM and Atom parts will be based on PowerVR Gen 6.
    Then the only difference would be the CPU.

    I keep wondering why we have to keep MMX and all those useless instructions on a mobile/smartphone CPU when 99.9% of the apps will have to be rewritten anyway. Why not take those out to spare the transistors?

    A 22nm 2nd gen Atom with out of order WILL surely be enough for a smartphone, but at what power level?

    I still think ARM will have the advantage in terms of power / performance and flexibility.
  • ProDigit - Friday, May 13, 2011 - link

    "I keep wonder, why do we have to keep all those MMX and useless instruction on a Mobile / Smartphone CPU when 99.9% of the Apps will have to be rewritten anyway. Why not take those out to spare the transistors."


    To keep compatibility!
    There are a lot of programs run on older and slower computers (like netbooks).

    It'd be good if those old instructions were removed from desktop processors, not from these slower machines.

    MMX is rarely used anymore when SSE(2/3/4) is available.
  • iwod - Saturday, May 14, 2011 - link

    To keep compatibility with what? Smartphone apps? There are NO x86 smartphone apps!

    That is why Intel should take this chance to re-engineer x86.
  • ProDigit - Friday, May 13, 2011 - link

    It all sounds so nice, out of order, but by doing that you essentially get a Celeron processor.
    I figure Intel could finally get to an "out of order" design because the 22nm processors would have the power envelope to do this!
    The problem with 45nm designs was that the TDP was too big to fit them in netbooks and tablets.

    If I were Intel, I would keep on using an 'in order' processor, to maximize battery life.
    Essentially the new gen Atom processor would have slightly increased performance (about 20-30%) together with ~50% more battery consumption. Give or take, with a die shrink the battery consumption would be about 10% more than current Atom processors (in other words, ~10 hours of battery life becomes ~9; 5 hours becomes ~4.5).
    The difference is negligible, due to the increase in performance.
    In other words, the step will be about as big as jumping from the N270 processor to the N475, or from the N450 to the N550.
    Not too noteworthy.

    If Intel kept their processors 'in order', they would gain battery life; compensate for the weakness by allowing dynamic overclocks (turbo boost), and you'd have a processor that operates like a Celeron (faster in some cases), with the power envelope coming very close to ARM territory; ON WINDOWS!

    The news of out of order technology is a blow to the face, and makes me all the more resolute to go for the 32nm Atom processors released by the end of this year!

    Out of order technology only benefits gamers and those who compress movies or do heavy computing.
    For viewing webpages and checking mail, in order technology is all you need!

    I think they are targeting the wrong market here!
    People would rather push a Celeron or ULV Core 2 Duo into their netbooks than an underpowered Atom processor!
  • ProDigit - Friday, May 13, 2011 - link

    ...That is, an underpowered Atom processor that has the TDP of an underclocked but higher performing Core 2 Duo or Celeron.
  • L. - Monday, May 16, 2011 - link

    Errr... wtf?
    Aren't ULV C2D + GPU setups much worse than Brazos in netbook use?
  • Jaybus - Monday, May 16, 2011 - link

    Well, the proposed move to OoOE is just speculation, of course. For a higher performance tablet or netbook, it would make more sense to shrink the Core i5-2437M, which is already at a 17W TDP at 32nm. So I'm not so sure the new Atom will indeed move to OoOE; it wouldn't make sense. Atom and ARM are not in the same league as Core Mobile, either in performance or price. This would appease those for whom "good enough" is NOT good enough. We've heard the "good enough" argument for many years. But good enough for what? Good enough for how long?
  • dealcorn - Friday, May 13, 2011 - link

    Looking at its capital budget for fabs, its dividend policy, and every word that comes out of their mouths, Intel is acting on a sincere belief that they have a very hot hand. Stay tuned to see whether the early reports of Atom's demise were premature. I do not think this is a zombie tale.
  • KMJ1111 - Friday, May 13, 2011 - link

    I don't know if the dividend policy is a sign that they have a hot hand. They may be realizing that they are more like a utility company in some ways (in that they cannot grow enough to create stock growth, so to attract investors they add dividends). They have been pretty stagnant for the last 5 years, if you take into account the market fall in general, and show a similar range-bound trend out 10 years (though with some higher peaks). By no means do I think Intel is going away, but I'm thinking they are in for a fight for consumers in the future, as I personally see the market and emerging markets becoming very different. I've read some analyst projections that say 2015 is the year for ARM with 15% market share, so maybe they are trying to stave that off, and they might well be able to with this new chip... but... I think it will be a challenge.

    They initially talked up the greatness of the Atom for their bottom line and for devices too, if I remember correctly. I wouldn't see their blustering now as a sign of their future dominance. I mean, why not try to sell a product you are developing and creating to the best of your ability? To me it looks like they are adding cost to a cheap chip with the new transistors, while facing greater competition from cheaper and increasingly powerful ARM. That's not necessarily desirable. I mean, the fact that they have to fight to enter a market that is already established and has big name market players is never ideal. That is not an easy task. Not that I assume the revamped Atom will be bad, but I have some doubts about whether it will be able to land solidly in the market. I wouldn't expect any less of them than to talk up their chips, though; it's not like they would say to everyone, "Well, this product is going to be crap compared to the competition and possibly a little more costly, BUT we'll be able to give you a ton of them with x86 instructions..."
  • Slumberthud - Friday, May 13, 2011 - link

    Diamondville is the codename of the Atom 230, 330, N270, and N280 processors. The platform that combines the Diamondville processor with the 945GMS (Calistoga) chipset is called Navy Pier.
  • dealcorn - Friday, May 13, 2011 - link

    Actually, a decision to increase the regular dividend rate twice in a six month period is categorically not characteristic of utilities as it would imply poor financial forecasting in what is typically a stable, low growth business. In a cyclical, capital intensive business such as Intel's, this rapid rate of dividend increases does mean that management thinks their hand is pretty good. Similarly, the decision to increase the number of current process, high production fabs from 3 to 4 and the scaling up of developmental fabs suggests they see a need for this capacity. They may well be wrong, but their belief is sincere.
  • KMJ1111 - Monday, May 16, 2011 - link

    You probably know more about markets than I do, but I see many of the old name tech players now on par with utilities in certain aspects. I see their dividend increases as more similar to Microsoft's than a utility's per se, but think the dividend increase is only there to retain investment despite their good performance otherwise. They have grown to a point where stock growth is no longer going to be able to provide the investor the returns one would look for, so they use dividends to retain investment. That was where I was thinking the correlation to utilities came from, as they generally have higher dividend rates as a trade-off for lower stock growth (it seems to me). That was why I mentioned their pretty stagnant stock trading range over the course of 5 and 10 years. It might be a very poor comparison to someone with more technical knowledge :-)

    Anyway, I do think they think they have good technology, but I think they also have to worry about the level of performance in the chips they put out, as low end chips that perform well enough will cut into other areas of their margins IMO. Their competitors don't have the same worry, as they are only moving up toward where Intel already is, and seem to be making some big strides at the moment. I own Intel stock and no ARM stocks (though I am looking at them), but don't expect to see much growth from Intel ever; at the same time I doubt they will disappear.
  • skyline100 - Monday, May 16, 2011 - link

    "By 2013 we should be seeing smartphones based on Tegra 3 and 4 (codename Wayne and Logan) and ARM's Cortex A15."

    AFAIK, the Tegra 2 is based on ARM's Cortex A9. And I thought the Tegra 3 and 4 are both based on ARM's CPU architecture too?
  • L. - Monday, May 16, 2011 - link

    Tegra 3 and 4 = ARM stuff + (this one is important) nVidia stuff.
    "ARM's Cortex A15" covers anything else based on that core from other vendors, including Qualcomm and the like.
