That's totally unrelated. That is a test chip for people to mess with, and figure out where the hardware itself should go, and how software needs to be written for future many-core CPUs.
The kind of fine-grained, complicated multithreading of ages past is simply not easy enough to code to be worth using (hence single-threaded apps everywhere that could be pervasively multithreaded), and it can be a hindrance to scaling as much as a help. Beyond maybe a dozen decently powerful cores there are many unknowns and many possible bottlenecks. On top of that, now that we're scaling out by adding cores, there's no reason to stop doing so, even if some disruptive technology gets clock speeds climbing again; accepting faster speeds as a matter of course has been an invisible crutch.
This will give research folks the ability to actually mess with code on such a computer, so as to be more ready to make the future CPUs work well. Also, such CPUs, with superior, but still weak, cores, may end up being good server CPUs, in the future.
Larrabee "Prime" is a failure due to Intel doing what Intel does best: pointlessly burning away money, and creating hype. They should have quietly gotten a small group of engineers to make a solid core design (the Isreali guys would be a good bet, following history), then once it was working well internally, gotten other groups involved. Once the 2nd or 3rd generation was good internally, polish it up, release it for niche markets, and move out from there.
I will tell you why! x86 is a dead end, and that has been known for decades now. It is only Intel's billions that keep it afloat.
Maybe that would make Intel realize that they have to move on and bring something new, something groundbreaking that is ready for the future!
[quote]
"Maybe that would make Intel realize that they have to move on and bring something new, something groundbreaking that is ready for the future"[/quote]
They did, years ago: they created a new "groundbreaking" 64-bit ISA which has become a "dead end" and is barely afloat... the Titanic disaster... Itanium.
It has been foreseen by many engineers that the x86 architecture is not really suited for graphics.
There are several reasons, but one of the most striking for me is that the x86 architecture is not really power efficient.
And with chips that are overclocked (turbo) up to a thermal limit instead of to a fixed value, x86-based Larrabee cards used for graphics would be outperformed by the architecture used by ARM.
Today's graphics cards are based on some sort of derivative of ARM-like architectures.
Still, Larrabee's technology might be very interesting in servers for cloud computing, and in home PCs where the motherboard holds room for 4, 6, 8, or 16 cores and one can upgrade by just buying more CPU cores.
I see a future in Larrabee CPUs, not GPUs, unless they move off the x86 structure they have been holding onto for years, which has now become slightly outdated!
Vaporware hits again...How's it feel to be hoodwinked?
Can you admit you were hoodwinked?
I cannot believe how many tech reporters jumped on Intel's hype train.
Amazing that no actual product arrived, and yet all these financial analysts affected stock prices for the last year on vaporware.
You have to be stupider than stupid to think Intel can execute in just a few years what GPU companies like NVIDIA and AMD have been eating, sleeping, and drinking 24/7/365.
Hype wins again, and yet a few of us had enough common sense to know the claims of dethroning discrete GPUs were impossible for any of the first several generations of products.
I'm actually rather disappointed with the tech sites that bought into the Intel hype; it's not the first time. It showed people's lack of common sense and experience in the field, and how easily they are still swayed by PR.
High-end 3D gaming graphics is becoming a dying business on the PC, unfortunately, due to console ports. As someone who upgraded GPUs almost every cycle beginning with the Monster 3D, I can no longer justify upgrading until the next console comes out. The GPU business is ripe for change and will require it, but Intel has a lot yet to learn.
The concept seems so simple and looks much more efficient. A top of the line CPU on one side, a top of the line GPU on the other and a Fusion APU in the middle. They just created a new market and have the industry backing it. Should prove to work out pretty well for their future...
"The software side leaves us a bit more curious, as Intel normally has a strong track record here"
On the contrary, Intel historically has been terrible at software. They killed their UNIX expertise just as Linux started, Itanium was proof they don't understand software at any level and they wrongly opposed a move to 64-bit software on x86, losing billions in share to AMD. No surprise they are last to the parallel software party.
The people now in charge at Intel got on the launching pad to the executive suite in the early 90's. That was a time when the chip guys at Intel were doing all they could to "control" the software guys. The chip guys didn't understand software and were threatened by it. They wanted to leave all the software side to Microsoft while Intel focused on line-width and manufacturing. The chip guys won, and people like Paul Otellini (nice guy, but clueless about software) took the helm. That's why software is a side show at Intel, why they learned all the wrong lessons from Itanium, and why, over ten years later, when for the first time just turning the crank on x86 won't cut it and they really need a new software paradigm for Larrabee, they don't know how to do it.
NVIDIA does. They understand the central role software plays in GPGPU. Look at recent events like GTC: the big news is not so much Fermi as the presence of numerous software teams who have been working with NVIDIA GPGPU for three years. Fermi is in key part based on the lessons of those software experiences, expertly applied by NVIDIA based on experience, not guesswork.
Intel has also been weak at working with paradigm-breakers because they can't get out of this "I'm Intel and you're dirt" mentality they have. Give them advice and they feel threatened. NVIDIA doesn't have that problem. Some of the most advanced work shown at GTC came out of companies nobody ever heard of before, to NVIDIA's benefit.
"To Intel’s credit, even if Larrabee Prime will never see the light of day as a retail product, it has been turning in some impressive numbers at trade shows."
I think this was probably the main reason Intel decided to go ahead and cancel the Larrabee hardware. The TFLOPS benchmark numbers and the "real-time ray tracing" demos were creating massively unreasonable expectations in the public mind, expectations that Larrabee would not be able to fulfill.
IMO, "Larrabee" has always been heavy on the PR side and very light on the hardware/software side. Intel oversold the concept to the degree that they were not able to actually make a product that would live up to the promises made and the capabilities implied. I think Intel wrung every last bit of PR promotion out of the Larrabee concept before it officially "killed" it. Much better that Intel cancel Larrabee now than go ahead and try to market it and watch it fail catastrophically.
This was a smart move for Intel, and one that I did not expect. I figured they were going to release it come hell or high water like when Intel pushed Netburst.
Personally, I'd rather see Nvidia and AMD join forces somehow and merge their high-end HPC/GPGPU technologies into a cohesive, seamless technology, and head Intel off at the pass. This would likely never happen for several reasons, the first being that it probably wouldn't pass the SEC and FTC regulatory smell tests. But I'm just *salivating* at the possibilities of the two doing a joint venture. One thing I do know: Nvidia will have to do something, and real soon. They aren't just getting squeezed out by Intel and ATI not needing their chipsets; there is also Lucid's Hydra chip, which may eventually render both Crossfire and SLI *moot*. Of course, this would affect ATI, but it could really hurt Nvidia far more.
Merging with AMD would solve their lack of a license agreement.
Another possibility might be Nvidia buying out VIA, but to what end, I'm not really certain, and would be too little too late.
AMD did try to buy out Nvidia, but Nvidia's CEO requested that he be CEO of the new company, and hence AMD moved on to ATI... Nvidia could, however, combine ARM cores with their GPUs, like they did with Tegra using a different type of GPU. This way Nvidia can still forge ahead with a serial and a parallel processor in one and not have to rely on x86 for HPC.
The reason for Intel not buying Nvidia is it would probably run into problems with anti-trust laws.
I seriously don't think the government would allow the number 1 CPU maker and the number 1 GPU maker to merge, because they would become too dominant. With the combined technology and engineering assets under one roof, AMD/ATI would not be able to compete.
You might argue Nvidia isn't currently #1 in the GPU arena, but it's just a formality as Fermi will allow Nvidia to retake the #1 spot. Nvidia gave up their lead temporarily to improve on general purpose GPU support.
This is no surprise. With all their R&D money they can't even make a passable driver and control panel for the current integrated "extreme" video chipsets. That's not even touching on the chips' abysmal performance. What revision is this now? 4? Each spin is touted as "even more extreme" than the last, yet all you can run is Unreal 1 at 800x600...
This news basically validates all of the various criticisms leveled at Larrabee over the last few years: that Larrabee wouldn't be a competent raster GPU, that too much die space was wasted on x86 instructions, and that Intel was spending far too much effort and R&D on peripheral features, like ray tracing, instead of focusing on making the part competitive.
It also helps confirm Intel was never really developing Larrabee with discrete graphics in mind; it was always a reactionary response to Nvidia's GPGPU and HPC efforts. I imagine that's why Intel is keeping the project alive. As it stands now, same as it was a few months and years ago, Larrabee serves no purpose: it was highly unlikely to serve as a competent discrete 3D raster-based GPU, and it is actually a conflicting interest for Intel in the only area where it might've really been competitive, as a highly parallel x86-based CPU for HPC.
I wonder what that means for 2012's Haswell architecture. It was supposed to unite the CPU with Larrabee extensions (successor to the on-die non-x86 GPU core). If Larrabee isn't ready until 2012 that could be a big problem.
[quote]and Intel is still hard at work developing their first discrete GPU.[/quote]
Just to correct an error in this article that so many people and tech websites seem to make:
Larrabee would not have been Intel's first discrete GPU. They produced the Intel i740, their first discrete GPU, almost a decade ago; it used the AGP bus and was then the basis for the Intel Extreme integrated graphics processors for years after that.
GPU is a term coined by NVIDIA to differentiate itself, but that just stuck around to encompass every graphic chip. It's silly to argue over its semantics.
Current GPUs do more than graphics, and ATi's chips should be called VPUs, but we all keep calling them GPUs anyway.
Larrabee IS the first GPU Intel has created in-house. Intel didn't design the i740. That was a joint venture with Real3D. They essentially provided the manufacturing capabilities while Real3D provided the engineering and design.
Bear in mind that i740 wasn't designed completely in-house; it was a joint project with Real3D/Lockheed. Larrabee is the first discrete GPU that Intel is designing entirely in-house.
I'm aware of that, but it was still badged as an Intel graphics part and thus their first discrete graphics card. - Glad you made the correction to the article, however.
Larrabee is a huge undertaking in x86 terms; balancing speed, low wattage, and tuned software was simply too much to do, if it was possible at all. Now Intel will have to leave it to the experts... does this mean Intel will be buying IP licenses from AMD? Or does this eye-opening occurrence bring the industry closer together on things like standards?
As I have said before, AMD has the blueprints and ideas for their next-gen products, and they would be dumb not to share an idea or two if it were to help set de facto standards for next-gen HPC/GPGPUs; that spirit could possibly be evident in the (light) 1.5 billion they agreed on. We'll see what happens
asH
Intel already has access to at least some of ATI's patents due to the new agreement between the two companies. But having access to patents is hardly a gateway into making a great GPU; you still have to build the thing, and more importantly you have to support it on the software side. Both Nvidia and ATI have been doing drivers for so long that it's a well-oiled machine. Jumping into the fray from scratch is incredibly challenging.
As for Intel making a discrete part before, the i740 was too slow when it came out, a big reason it never gained a foothold. Sadly, some of the tech still lives on in Intel's GMA, the worst thing ever to happen to graphics.
"Intel already has access to at least some of ATI's patents due to the new agreement between the two companies"
It's my understanding that the cross licensing agreements between AMD/ATI and Intel do not cover each other's respective GPGPU technologies, only the x86 side of things and extensions.
Why would cross patent licensing between AMD and Intel include any ATI intellectual property? Disputes between the two companies have always been about CPUs, not GPUs.
Because it does? According to Dirk Meyer's comments, the new agreement as part of Intel's cash payout to AMD includes renewed cross licensing that includes ATI GPU tech.
What we don't know is the actual scope of what is included, but at least some is.
No, the crappiest was the S3 Savage 2000. Intel's T&L implementation actually works; S3's wouldn't function, and if you could get it to function you would get massive amounts of graphical anomalies.
However, S3's IGPs are far superior to Intel's these days, as they are based on the S3 Chrome chip.
Intel's main issue is the drivers; they plainly suck. They should take a page out of nVidia's and ATI's driver development work and implement a similar strategy.
The Intel X3xxx series had massively varied performance: DirectX 9 performs poorly; DirectX 10, even though the X3100 supports it, will never run with acceptable image quality and thus performance (if the game ever decides to work); and even old DirectX 7 and 8 based games still perform poorly.
Hence the term "Intel Decelerators". - Personally I would prefer a GMA 950 over the X3000/X3100/X3500, simply because it's faster and can be overclocked with GMA Booster, despite having a lacking feature set.
Until Intel gets their act together concerning their IGPs and the drivers for them, I will never take them seriously for a discrete part. - Every single piece of graphics hardware to come from that company has simply sucked.
I've always been fairly skeptical of Intel's ability to successfully jump into the GPU market. There's a reason why AMD purchased ATI instead of trying to roll its own GPUs, and there's a reason why nvidia doesn't make CPUs. A world of difference exists between the two, and just because a company has a solid CPU engineering pipeline, it doesn't mean that the same resources can be leveraged to make a solid GPU.
In any case, I think this is a big win for AMD's Fusion architecture. With the failure of Larrabee it now seems very likely that they will be the first/only company to be offering an integrated CPU/GPU combo, or at least the first company to be offering a *decent* one. Purchasing ATI has already proven to be a good move for AMD, and the demise of Larrabee makes it an even better one.
Which is quite surprising in retrospect. When AMD purchased ATI, it was heralded by many as one of the worst business deals of all time. Too expensive, ATI tech is not good enough, too much money to pay just to be able to offer a complete platform. (Many believed AMD would outright give up doing high-end discrete parts.)
Yes, Intel will be first with a CPU/GPU hybrid, but marrying such a terrible GPU to an excellent CPU yields a very unbalanced and strange piece of silicon. Sure, Intel could potentially improve their GPU, but how many years have we been hearing that was going to happen? Performance is still incredibly lackluster. Wasn't that the whole idea of Larrabee in the first place? Get close to class-leading performance, or at least be in the same universe.
Ironic how AMD ends up being the only one out there with the tech to make a balanced Fusion-type product.
"Which is quite surprising in retrospect. When AMD purchased ATI, it was heralded by many as one of the worst business deals of all time. Too expensive, ATI tech is not good enough, too much money to pay just to be able to offer a complete platform"
I remember the criticisms of AMD's acquisition of ATI very well. At the time, I too was a bit concerned - not because it was a bad idea for AMD to buy ATI, but because it was a *bad time* to do so considering the legal entanglement of their suit against Intel. While AMD needed to become a complete platform vendor, the expense of chewing two big bites (acquisition + Intel suit) darn near choked AMD.
Strangely enough, the economy tanking may have actually worked to AMD's advantage with regard to the ATI acquisition, because it forced AMD to focus on the mid-level and low-end market segments well ahead of Intel -- essentially beating a 'tactical retreat' and digging in there. ATI, the very thing that almost *choked* AMD, is now the very corporate asset that is keeping AMD competitive, with just enough cash flow/revenue stream to be able to withstand the losses AMD has had to suffer for as long as they have. Roll the clock ahead a few years, add a nice healthy settlement with Intel, and now there is hope for AMD to actually work back to being competitive in the high-end market segment again - especially with the in-house manufacturing limitation *gone* (allowing AMD to go completely fabless).
"Which is quite surprising in retrospect. When AMD purchased ATI, it was heralded by many as one of the worst business deals of all time. Too expensive, ATI tech is not good enough, too much money to pay just to be able to offer a complete platform"
I remember the criticisms of AMD's aquisition of ATI very well. At the time, I too was a bit concerned - not because it was a bad idea for AMD to buy ATI, but it was a *bad time* to do so considering the legal entanglement with their suit against Intel. While AMD needed to become a complete platform vendor, the expense of chewing two big bites (aquisition + Intel suit) darn near choked AMD.
Strangely enough, the economy tanking may have actually worked to AMD's advantage with regard to the ATI aquisition because it forced AMD to focus on the mid level and low end market segments well ahead of Intel -- essentially beating a 'tactical retreat' and digging in there. ATI, the very thing that almost *choked* AMD is now the very corporate asset that is keeping AMD competitive with just enough cash flow/revenue stream to be able to withstand the losses AMD has had to suffer for as long as they have. Roll the clock ahead a few years, and a nice healthy settlement with Intel, and now there is hope for AMD to actually be able to work back to being competitive in the high-end market segment again - especially with the in-house manufacture limitation *gone* (allowing AMD to go completely fabless).
Integrating Intel's GPU into Sandy Bridge is aimed mostly towards the low end (very low end) segment where FPS in Crysis are meaningless. Practically all the enterprise market and quite a lot of the retail market (desktop/laptop) can settle for a cheap GPU that can do anything but play new games, take zero space and very little power. This will also be the first time an integrated GPU is manufactured on the same process (32nm) as the CPU. Sandy Bridge's successor "Ivy Bridge" will be a 22nm part with even lower power and better performance.
My work PC is a laptop with Intel graphics and it does the job - it connects fine to various screens and projectors.
The rant about the performance of integrated GPUs is ridiculous - for proper gameplay you must buy a 100W-or-more GPU. You'll never ever get the same performance from a 5W part of the same generation. Since the game companies always aim towards the mid-to-high-end segment, my statement holds.
I disagree with your assertion that IGP performance doesn't matter in the low-end segment.
My laptop, a GMA 4500-based HP, struggles with heavy Flash. The reason partially lies in the weak laptop CPU, but a lot of the lag does have to do with the crappy Intel IGP.
Elsewhere, we have the older X3100-based MacBooks, which struggle even to play YouTube in standard definition or run a Java game at full speed. Again, those things are slowed down by the Intel solutions; on an Ion-based system it wouldn't happen.
HD movies also benefit from beefier graphics solutions - just look at the difference between Ion and the GMA 4500.
Exactly, gaming performance is no longer the only reason to want a decent GPU in your system. As Blu-ray drives and HD content continue to become more widespread, and as more applications start going the GPU accelerated route like Flash, the performance of the GPU is going to become more important, even for users who never play a demanding 3d game.
As I said above, the failure of Larrabee leaves AMD in a relatively strong position for the next couple of years. The only question is whether or not they'll be able to execute on it.
This writeup has some odd assumptions. You are saying that the architecture itself is sound, but there is some hardware problem, presumably Intel cannot pack enough cores using current fab tech to make it competitive in traditional graphics?
Well if that's so, then the architecture is NOT sound. Waiting for a better process node is pointless, because your competition will move to a new one as well, combined with much better performance. If the architecture were truly solid, then it would be a competitive part if it were made today. Waiting to make the hardware viable and competitive is a losing battle because your competition never sits still, which is exactly what has happened to Larrabee as it stands today.
Larrabee is just not a good idea for traditional graphics rendering.
Larrabee looks good on paper, but working out the details is much, MUCH more problematic.
Which generally means something has been overlooked, or the design is more complicated than it needs to be, or they're taking the wrong approach to the problem.
In the case of the Larrabee concept, I think Intel is taking the wrong approach to the problem.
GPGPU is in fact not about "cloud computing" or stream processors or compute shaders... GPGPU is largely a misnomer (a red herring, if you will), which often causes a problem of aiming at the wrong target.
What Intel or AMD or whoever wants to get ahead in the next stage of computer evolution needs to build is a faster floating-point processor. Which is EXACTLY what Nvidia has done... inadvertently, because building a super-fast 3D accelerator requires huge amounts of floating-point calculation... and when Nvidia bought Ageia's PhysX they stumbled on yet another piece of the next evolutionary stage, a software/hardware physics engine... perfect for doing simulations, scientific and otherwise.
Think about what is common to stream processors, compute shaders, and supercomputing... number crunching... FLOPS (floating-point operations per second).
To put it in the simplest terms, what they're aiming for is a massive floating-point processing unit array...
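As a rough illustration of how the headline number for such an array is derived, peak throughput is just units x operations per unit per clock x clock rate. The 240 units, 2 ops per clock, and 1.3 GHz in the C sketch below are assumed round figures chosen for illustration, not any particular product's specification.

```c
#include <stdio.h>

/* Peak FLOPS of a wide floating-point array is simple arithmetic:
   units * ops-per-unit-per-clock * clock rate.
   All inputs here are assumed, illustrative values. */
int main(void)
{
    const double units       = 240;    /* assumed number of FP lanes      */
    const double ops_per_clk = 2;      /* assumed multiply-add per clock  */
    const double clock_hz    = 1.3e9;  /* assumed 1.3 GHz shader clock    */

    double peak = units * ops_per_clk * clock_hz;
    printf("Peak: %.0f GFLOPS (%.2f TFLOPS)\n", peak / 1e9, peak / 1e12);
    return 0;
}
```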
Why Intel shouldn't try to build a GPGPU is, in my opinion, this...
They became so dominant in the CPU arena that they overlooked THE one major area in which they could have excelled, supercomputing... floating-point processing... and they lost sight of that target when CPUs for personal computers surpassed CPUs designed for supercomputers, and they lapsed into complacency.
Floating-point calculations... as far as I know, the SSE (Streaming SIMD) units on multicore processors aren't coordinated to operate as a single unit... something for AMD and Intel to look into. The area might be worth looking into for various reasons I won't go into yet.
The approach I would recommend is to look at coordinating the SSE units on multicore CPUs from the software standpoint, and then work back to how to improve the hardware design in the CPU for more efficient operation.
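For concreteness, here is a minimal C sketch of what a single 128-bit SSE unit does per instruction: each `_mm_add_ps` adds four packed 32-bit floats at once. It only covers the per-core SIMD path; the cross-core coordination suggested above would still have to be layered on top with threads. The array contents and sizes are arbitrary illustration.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: 128-bit packed single-precision */

/* Adds two float arrays four elements at a time. Each __m128 register
   holds 4 x 32-bit floats, i.e. the "128-bit" width discussed above. */
static void add_arrays(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)            /* scalar tail for leftover elements */
        out[i] = a[i] + b[i];
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    add_arrays(a, b, c, 8);
    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);    /* prints eight 9s */
    printf("\n");
    return 0;
}
```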
The area of focus I would recommend (if I were making the choice) would be Blu-ray (1920x1080, H.264) decoding. With the combination of a 4-series integrated graphics device and a dual-core processor, there should be enough processing power and memory bandwidth to play back Blu-ray movies without dropping any frames. At least that's my opinion anyway.
Why focus on Blu-ray? Blu-ray requires lots of computing power... and it's a more popular format than DVD (or it will be, which is why someone should start working on a solution now rather than later). Blu-ray movies don't play back all that well on a lot of laptops I've tested.
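To give a feel for the decode load being described, here is a back-of-the-envelope C sketch: a 1920x1080 H.264 frame is coded as 120x68 = 8,160 16x16 macroblocks, so at 30 frames per second (an assumed rate; many Blu-ray titles run at 24 fps) the decoder has to handle roughly 245,000 macroblocks every second.

```c
#include <stdio.h>

/* Rough decode-load estimate for 1080p H.264 (Blu-ray video).
   The 30 fps figure is an assumed illustration. */
int main(void)
{
    const int mb_w = (1920 + 15) / 16;   /* 120 macroblock columns         */
    const int mb_h = (1080 + 15) / 16;   /* 68 macroblock rows (padded)    */
    const int fps  = 30;                 /* assumed frame rate             */

    long mbs_per_frame  = (long)mb_w * mb_h;
    long mbs_per_second = mbs_per_frame * fps;
    printf("%ld macroblocks/frame, %ld macroblocks/s to decode\n",
           mbs_per_frame, mbs_per_second);
    return 0;
}
```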
Another reason for Intel to work this out is that Nvidia looks like they're going to take another chunk of the market and/or create a new segment: set-top-box-sized computers using another flavor of their general-purpose GPU to play back Blu-ray, paired with dual-core ARM processors.
The area they've overlooked is how to improve computing power... their answer is to just add more cores and Hyper-Threading... all the while neglecting the SSE units, which could be utilized better.
I mean the whole processor idea could use some serious rethinking.
Set-top box media computers all the way up to supercomputers.
Various types of servers for example don't need floating-point processors at all... (file, print, search engines, databases, etc.)
This is exactly what I expected after that weird out-of-the-blue announcement of the suspiciously Larrabee-like 'cloud CPU' with '48' 'very simple x86' cores (read: mesh of 48 in-order x86 CPU's)
Too bad, the graphics market could use the competition (especially since nVidia seems prepared to cede control of the market to ATI).
I wonder if this has anything to do with the antitrust situation (that is, it's probably not a good idea to invest a bunch of money entering a new market when you're about to get fined billions of dollars, and you may be told at the end of a long and expensive development project that you can't release the product due to antitrust sanctions).
I'm still more partial to a software problem. The article may state, "The software side leaves us a bit more curious, as Intel normally has a strong track record here", but that's only in relation to the purely x86 ecosystem.
Given Intel's GPU track record, however, that statement is not true at all. Their integrated graphics have been underachieving for almost their entire existence - lots of promises made ("feature x in the next driver revision", ...) but little delivered.
Moreover, there's no mention of the new extra-wide vector instruction set from Larrabee in the new 48-core chip, nor would that really be especially useful in its intended "cloud-computing" target market.
I even wonder if Larrabee will have a successor, since it sounds like the initial product is a pretty major disappointment. It's easier for Intel to say the product is not totally dead, but an announcement like this just keeps investors looking to the future. After all, Intel has a very long way to go in a segment of the market where they have totally sucked for a long while. Could Larrabee become another Itanic, where Intel's dream is to take over a segment with their own architecture, only to have it become a niche product?
Intel NEEDS a GPU. AMD is going to integrate the GPU on the processor die and eventually move all the floating-point work to the GPU. How will Intel's CPUs compete with a TFLOPS of compute power?
If you mean Intel needs to build their own GPGPU, I think you're wrong (my opinion). There are already two very successful companies that make better GPGPUs than Intel... namely NVidia, with ATI/AMD a distant second. But even AMD/ATI is doing better than Intel; otherwise Intel wouldn't have canceled Larrabee (my opinion). And in building their own GPGPU they've overlooked something critical... something very critical. The way Intel and AMD are handling the direction of CPUs... they're just adding more and more cores, and perhaps wasting valuable transistor real estate that could be put to better use, like more cache memory.
They're shifting attention away from where they should be focusing... improving the overall system design of the computer. By improving I don't mean adding more cores to the CPU, but rethinking the CPU/GPGPU relationship and redesigning the whole computer system. They need, as they call it, another "paradigm shift".
And integrating a GPU into the CPU isn't really a good idea. You get unnecessary design complexity and end up with a lower-performing CPU/graphics design... in an engineering textbook this would be a no-no, unless you are going for a low-transistor/low-power design, which won't hold a candle to a high-end discrete CPU plus GPU design.
How will Intel's CPUs compete with a TFLOPS of compute power?
Easy... they need to rethink the CPU and GPU... TFLOPS... tera (trillion) floating-point operations per second... where does it come from? Look underneath the hood and you find stream processors... and what do all stream processors have in common? They do floating-point math...
128-bit floating-point math... There used to be a time when the CPU had a discrete math processor called the FPU (an 80-bit floating-point unit), but rather than create multi-core FPUs (because the concept/technology/design methodology and basic knowledge to do that weren't available at the time), they integrated the FPU into the CPU (irony).
The reason for the reemergence of floating-point processing is that it is HEAVILY utilized in 3D graphics, and its use is universal in computing, especially simulations.
The only thing they don't do is integer operations, and this is the part the CPU design engineers need to look into... creating a discrete part with a massive array of SSE (Streaming SIMD Extensions) units, with some limited integer operations... maybe.
The CPU might be stripped down of unnecessary components and instead integrate with the chipset to become a super-fast system I/O manager... handling I/O, thread management/scheduling, acting as a system watchdog, etc. They've already moved in that direction by incorporating the memory controller hub into the CPU.
There are some details I have left out since it's beyond the scope of this forum/venue.
But it is my belief that traditional multi-core CPUs are on a fast track to becoming obsolete.
GPUs are very limited in their flexibility, and so would Larrabee have been. Few programs align ideally with the "massive numbers of simple processing units" design, so general-purpose CPUs aren't going anywhere anytime soon.
Intel is like any other company: expand or die. I have no doubts that there will be a Larrabee product at some point in the future. Intel can't afford to ignore the massively parallel computing/HPC market.
That is the problem, isn't it: expand or die. But Intel is trying to expand into the wrong market. They're trying to build a general-purpose GPU instead of expanding into the 128-bit (super)computing market.
Nvidia inadvertently built a massive array of (about 216/240) stream processors, compute shaders, or more plainly speaking... 128-bit math processors, into their GPUs, needed for 3D graphics acceleration. Then they bought the Ageia PhysX (physics) engine and incorporated it into their GPUs, and what they ended up with was the start of the next generation of supercomputing. The supercomputer built at the University of Antwerp: 13 NVidia GPUs, 4 times faster than their previous supercomputer cluster consisting of 256 AMD Opterons (512 cores), and costing under 6000 euros. The previous supercomputer cost over a million euros. This is a significant development!
Intel is trying to go head-to-head with the NVidia GPUs when they need to be working on a 128-bit processor that complements GPUs, but from the CPU side.
A 128-bit processor not limited to just math, but also useful for some limited integer operations. A processor with a massive array of 128-bit stream processors to do PhysX or physics calculations for better 3D graphics and visual effects. It would allow Nvidia to focus their GPU design on 3D graphics rather than allocating GPU resources to PhysX (physics) calculations, or falling into a more general-purpose design that ends up wasting transistor real estate... note: Nvidia lost their GPU crown (temporarily) to AMD/ATI.
On the other hand, Intel could keep trying to build their GPGPU and neglect their end of the spectrum in the massively parallel computing/HPC market.
ProDigit - Sunday, December 6, 2009 - link
Intel Cancels Larrabee Retail Products. REASON:
http://news.cnet.com/2300-1001_3-10001951.html?tag...
???
snarfbot - Sunday, December 6, 2009 - link
Kinda like how the POWER architecture is dead and has been for decades now, and it's only the billions of IBM that keep it afloat?
PsiAmp - Saturday, December 5, 2009 - link
"Prime running the SGEMM HPC benchmark at 1 TeraFLOP"It has to be FLOPS. 'S' on the end is not representing plural, but 'S' - second.
FLoating point Operations Per Second
spathotan - Saturday, December 5, 2009 - link
Wow... what an INCREDIBLE waste of time and hype.
justonce - Saturday, December 5, 2009 - link
Somewhere they are laughing at Intel's failure!
kmmatney - Saturday, December 5, 2009 - link
Thanks for that. The Voodoo 5 6000 guys are laughing too.
Orangutan2 - Sunday, December 6, 2009 - link
For me a lot of the things you say ring true. I would love to see Nvidia come out with a breakthrough Fermi.
I'll buy Fermi because I'll use it with Badaboom, but I need it to be fast for games or my friends will take the piss.
Intel has really held things back over the last 20 years, so competition all the way.
Companies only work as hard as they have to.
HighTech4US - Saturday, December 5, 2009 - link
Why are you giving credit to Intel for hidden hardware demos where it was even stated that they were overclocking the hardware?
Do you not remember the Pentium 4 that was shown clocked at 4 GHz at an Intel IDF? It was NEVER released.
Until a part is available for independent review, treat all Intel trade show numbers with great skepticism.
WaltC - Saturday, December 5, 2009 - link
Agreed... but that's exactly what I said in the first place... ;) The quote in my original post is from the AnandTech article.
Ryun - Saturday, December 5, 2009 - link
*edit* Whoops, meant Itanium (not sure how I messed that one up)
bupkus - Saturday, December 5, 2009 - link
As nobody else has thrown this in, I will: will Intel try to buy nVidia as AMD did ATI?
lifeblood - Sunday, December 6, 2009 - link
Intel may or may not be too proud to do it, but Huang is way too pigheaded to accept it.
eddieroolz - Sunday, December 6, 2009 - link
Unreal at 800x600? You're lucky! I can't even run Halo at 640x480 on this GM45HD.
cyberserf - Saturday, December 5, 2009 - link
They were also making something in those days that would outclass the competition. Look how that turned out.
IdaGno - Saturday, December 5, 2009 - link
IOW, nVidia is closer to producing their version of an x86 clone than Intel is to producing a viable gaming GPU, integrated or otherwise.
Kiijibari - Saturday, December 5, 2009 - link
The picture of your so-called "Larrabee Prime" is clearly a Core 2 Quad processor, code name "Yorkfield". Here is your source:
http://www.intel.com/pressroom/kits/45nm/photos.ht...
http://download.intel.com/pressroom/kits/45nm/45nm...
With an error like this, I now doubt the overall quality of the article. Probably the only things that are going to be canceled are the old S775 QuadCores ;-)
Ryan Smith - Saturday, December 5, 2009 - link
I was in a rush and grabbed that image from our Larrabee deep dive article, believing it to be a shot of Larrabee. Clearly that's wrong, and I've pulled the image. Thanks for catching that.
Elementalism - Saturday, December 5, 2009 - link
A decade later, same poor project management over a compiler-driven arch.
sprockkets - Friday, December 4, 2009 - link
vaporware
Olen Ahkcre - Friday, January 8, 2010 - link
... Larrabee would've been Intel's first "general purpose" GPU.
cocoviper - Saturday, December 5, 2009 - link
The i740 wasn't a GPU. The first GPU was the GeForce 1, which integrated T&L hardware with the traditional fixed rendering pipeline. The i740 was part of the TNT/Voodoo 2 generation that lacked T&L hardware and was thus simply a basic 2D/3D renderer, not a GPU.
jconan - Sunday, December 6, 2009 - link
But Intel owns Real3D and they still haven't improved the graphics.
qcmadness - Friday, December 4, 2009 - link
The crappiest part is that T&L support only started with the GMA X3000 (G965 variants). Before that, virtually all GMA IGPs ran shaders in software.
mutarasector - Saturday, December 12, 2009 - link
"Which is quite surprising in retrospect. When AMD purchased ATI, it was heralded by many as one of the worst business deals of all time. Too expensive, ATI tech is not good enough, too much money to pay just to be able to offer a complete platform"I remember the criticisms of AMD's aquisition of ATI very well. At the time, I too was a bit concerned - not because it was a bad idea for AMD to buy ATI, but it was a *bad time* to do so considering the legal entanglement with their suit against Intel. While AMD needed to become a complete platform vendor, the expense of chewing two big bites (aquisition + Intel suit) darn near choked AMD.
Strangely enough, the economy tanking may have actually worked to AMD's advantage with regard to the ATI acquisition, because it forced AMD to focus on the mid-level and low-end market segments well ahead of Intel -- essentially beating a 'tactical retreat' and digging in there. ATI, the very thing that almost *choked* AMD, is now the very corporate asset keeping AMD competitive, with just enough cash flow/revenue stream to withstand the losses AMD has had to suffer for as long as they have. Roll the clock ahead a few years, add a nice healthy settlement with Intel, and now there is hope for AMD to work its way back to being competitive in the high-end market segment again -- especially with the in-house manufacturing limitation *gone* (allowing AMD to go completely fabless).
Technium - Saturday, December 5, 2009 - link
Integrating Intel's GPU into Sandy Bridge is aimed mostly at the low-end (very low-end) segment, where FPS in Crysis is meaningless. Practically all of the enterprise market and quite a lot of the retail market (desktop/laptop) can settle for a cheap GPU that can do everything except play new games, takes zero space, and uses very little power. This will also be the first time an integrated GPU is manufactured on the same process (32nm) as the CPU. Sandy Bridge's successor, "Ivy Bridge", will be a 22nm part with even lower power and better performance.
My work PC is a laptop with Intel graphics and it does the job -- it connects fine to various screens and projectors.
The rant about the performance of integrated GPUs is ridiculous -- for proper gameplay you must buy a 100W-or-more GPU. You'll never get the same performance from a 5W part of the same generation. Since game companies always aim at the mid-to-high-end segment, my statement holds.
eddieroolz - Sunday, December 6, 2009 - link
I disagree with your assertion that IGP performance doesn't matter in the low-end segment.
My laptop, a GMA 4500-based HP, struggles with heavy Flash. The reason partially lies in the weak laptop CPU, but a lot of the lag does have to do with the crappy Intel IGP.
Elsewhere, we have the older X3100-based MacBooks, which struggle even to play YouTube in standard definition or run a Java game at full speed. Again, those things are slowed down by the Intel solutions; on an Ion-based system that wouldn't happen.
HD movies will also benefit from beefier graphics solutions -- just look at the difference between Ion and the GMA 4500.
rs1 - Sunday, December 6, 2009 - link
Exactly -- gaming performance is no longer the only reason to want a decent GPU in your system. As Blu-ray drives and HD content continue to become more widespread, and as more applications like Flash go the GPU-accelerated route, GPU performance is going to become more important, even for users who never play a demanding 3D game.
As I said above, the failure of Larrabee leaves AMD in a relatively strong position for the next couple of years. The only question is whether or not they'll be able to execute on it.
AnandThenMan - Friday, December 4, 2009 - link
This writeup has some odd assumptions. You are saying that the architecture itself is sound, but that there is some hardware problem -- presumably Intel cannot pack enough cores using current fab tech to make it competitive in traditional graphics?
Well, if that's so, then the architecture is NOT sound. Waiting for a better process node is pointless, because your competition will move to a new one as well, combined with much better performance. If the architecture were truly solid, it would be a competitive part if made today. Waiting for the hardware to become viable and competitive is a losing battle because your competition never sits still, which is exactly what has happened to Larrabee as it stands today.
Larrabee is just not a good idea for traditional graphics rendering.
Olen Ahkcre - Saturday, January 9, 2010 - link
"Devil is in the details..."Larabee looks good on paper, but working out the details is much, MUCH more problematic.
Which generally means something was overlooked, the design is more complicated than it needs to be, or they're taking the wrong approach to the problem.
In the case of the Larrabee concept, I think Intel is taking the wrong approach to the problem.
GPGPU is in fact not about "cloud computing" or stream processors or compute shaders... GPGPU is largely a misnomer (a red herring, if you will), which often leads to aiming at the wrong target.
What Intel, AMD, or whoever wants to get ahead in the next stage of computer evolution needs to do is build a faster floating-point processor. Which is EXACTLY what NVIDIA has done... inadvertently, because building a super-fast 3D accelerator requires huge amounts of floating-point calculation... and when NVIDIA bought Ageia's PhysX they stumbled onto yet another piece of the next computing evolutionary stage, a software/hardware physics engine... perfect for doing simulations, scientific and otherwise.
Think about what stream processors, compute shaders, and supercomputing have in common... number crunching... FLOPS (floating-point operations per second).
To put it in the simplest terms, what they're aiming for is a massive array of floating-point processing units...
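To put some rough numbers on that FLOPS framing, here is a minimal back-of-the-envelope sketch; the core counts, SIMD widths, and clock speeds below are illustrative assumptions for circa-2009 parts, not figures from any datasheet:
[code]
/* Back-of-the-envelope peak-FLOPS sketch -- all figures are illustrative
 * assumptions, not measured or quoted numbers. */
#include <stdio.h>

static double peak_gflops(double units, double lanes_per_unit,
                          double flops_per_lane_per_cycle, double ghz)
{
    return units * lanes_per_unit * flops_per_lane_per_cycle * ghz;
}

int main(void)
{
    /* Quad-core CPU: 4 cores, 4-wide SSE, assume 1 add + 1 mul per cycle. */
    printf("CPU: ~%.0f GFLOPS\n", peak_gflops(4, 4, 2, 3.0));
    /* GPU: ~240 scalar stream processors, assume 2 FLOPs per cycle each. */
    printf("GPU: ~%.0f GFLOPS\n", peak_gflops(240, 1, 2, 1.4));
    return 0;
}
[/code]
The point is only that the gap comes from the sheer width of the floating-point array, not from any single unit being faster.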
Why Intel shouldn't try to build a GPGPU is, in my opinion, this...
They became so dominant in the CPU arena that they overlooked THE one major area in which they could have excelled in the field of supercomputers... floating-point processing. They lost sight of that target when CPUs for personal computers surpassed CPUs designed for supercomputers, and lapsed into complacency.
Floating-point calculations... as far as I know, the SSE (Streaming SIMD Extensions) units on multicore processors aren't coordinated to operate as a single unit... something for AMD and Intel to look into, for various reasons I won't go into yet.
The approach I would recommend is to coordinate the SSE units on multicore CPUs from the software standpoint first, then work back to how to improve the hardware design in the CPU for more efficient operation.
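As a minimal sketch of what "coordinating the SSE units from the software standpoint" could mean in practice -- my illustration, assuming OpenMP for the threading side, not anything Intel or the original post specifies -- split the data across cores and let each core's thread drive its own SSE unit with 128-bit packed-float operations:
[code]
/* Sketch: each core runs a thread, and each thread feeds that core's SSE
 * unit with 4-wide packed single-precision adds.
 * Assumed build line: gcc -O2 -msse -fopenmp sse_sketch.c */
#include <xmmintrin.h>   /* SSE intrinsics */
#include <stdio.h>

#define N (1 << 20)      /* element count, divisible by 4 */

static float a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = i * 0.5f; b[i] = i * 0.25f; }

    /* OpenMP hands each thread a chunk of the index range; within its
     * chunk, each thread processes 4 floats per instruction. */
    #pragma omp parallel for
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
    }

    printf("c[10] = %f (expected 7.5)\n", c[10]);
    return 0;
}
[/code]
Getting the hardware to do this transparently, without the programmer spelling out the threading, is the part that would need new CPU design work.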
The area of focus I would recommend (if I were making the choice) would be Blu-ray (1920x1080, H.264) decoding. With the combination of a 4-series integrated graphics device and a dual-core processor, there should be enough processing power and memory bandwidth to play back Blu-ray movies without dropping any frames. At least, that's my opinion.
Why focus on Blu-ray? Blu-ray requires lots of computing power, and it's a more popular format than DVD (or it will be, which is why someone should start working on a solution now rather than later). Blu-ray movies don't play back all that well on a lot of laptops I've tested.
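For a rough sense of the numbers involved, here is a quick sketch; the bitrate and frame-size figures are the commonly published Blu-ray/H.264 limits, and the bandwidth comparison is my own illustrative assumption:
[code]
/* Rough Blu-ray 1080p playback arithmetic -- illustrative only. */
#include <stdio.h>

int main(void)
{
    const double video_mbps  = 40.0;                    /* max AVC video bitrate on Blu-ray */
    const double fps         = 24.0;                    /* typical film frame rate */
    const double frame_bytes = 1920.0 * 1080.0 * 1.5;   /* 4:2:0 planar, 12 bits per pixel */

    double compressed_MBps = video_mbps / 8.0;                    /* input stream */
    double decoded_MBps    = frame_bytes * fps / (1024 * 1024);   /* raw frames out */

    printf("compressed stream: ~%.1f MB/s\n", compressed_MBps);   /* ~5 MB/s  */
    printf("decoded frames:    ~%.1f MB/s\n", decoded_MBps);      /* ~71 MB/s */

    /* Both are small next to a dual-channel DDR2/DDR3 system's ~10+ GB/s, so the
     * hard part is the decode compute (CABAC, deblocking), not memory bandwidth. */
    return 0;
}
[/code]
Which is why the dual-core-plus-IGP combination suggested above is at least plausible.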
Another reason for Intel to work this out is that NVIDIA looks like they're going to take another chunk and/or create a new market segment: set-top-box-sized computers that use another flavor of their general-purpose GPU, paired with dual-core ARM processors, to play back Blu-ray.
The area they've overlooked is how to improve computing power... their answer is just to add more cores and Hyper-Threading, all the while neglecting the SSE units, which could be utilized better.
I mean the whole processor idea could use some serious rethinking.
Set-top box media computers all the way up to supercomputers.
Various types of servers, for example, don't need floating-point processors at all (file, print, search engines, databases, etc.).
Logical analysis (like chess) doesn't require floating-point processors...
Intel needs to work out how to divide up the transistor real estate better...
From a system-on-a-chip (SoC) up to discrete integer/floating-point units for supercomputing needs.
The number of ... aaahhh-hhhaaa!!!
Sorry for rambling on and on... I need to work out some of the details...
but what I see is several different flavors of overlapping CPU designs... targeted at various market segments...
Still working out the details...
The ideas are coming faster than I can sort out in my head much less type...
Well that's it for now... just some ideas to throw out there if anyone is interested.
qcmadness - Friday, December 4, 2009 - link
The 1024-bit ring bus could be the problem. Remember when ATI put a ring bus in R520 and removed it in RV770?
I don't think chip-wide cache coherency is a good idea either.
Sahrin - Friday, December 4, 2009 - link
This is exactly what I expected after that weird, out-of-the-blue announcement of the suspiciously Larrabee-like 'cloud CPU' with '48' 'very simple x86' cores (read: a mesh of 48 in-order x86 CPUs).
Too bad -- the graphics market could use the competition (especially since NVIDIA seems prepared to cede control of the market to ATI).
I wonder if this has anything to do with the antitrust situation (that is, it's probably not a good idea to invest a bunch of money entering a new marketplace when you're about to get fined billions of dollars, and may be told at the end of a long and expensive development project that you can't release the product due to antitrust sanctions)?
Olen Ahkcre - Friday, January 8, 2010 - link
Larrabee being cancelled, and ATI being in the number 1 spot, shows that the GPU market is in fact extremely competitive right now.
Spoelie - Saturday, December 5, 2009 - link
I'm still more inclined to blame a software problem. The article may state, "The software side leaves us a bit more curious, as Intel normally has a strong track record here", but that's only in relation to the purely x86 ecosystem.
Given Intel's GPU track record, however, that statement is not true at all. Their integrated graphics have been underachieving for almost their entire existence -- lots of promises made ("feature X in the next driver revision", ...) but little delivered.
aj28 - Saturday, December 5, 2009 - link
Hmm... I suppose this does tend to imply that the 48-core chip was just Larrabee with the name stripped off. Any official confirmation on this, though?
brooksmoses - Saturday, December 5, 2009 - link
I don't think this is the same -- the topology of the dies seems to be very different.
In particular, all the schematics of Larrabee show the cores clustered around a linear ring bus, and this one photo of the die appears to match that:
http://www.techtickerblog.com/2009/05/15/intel-sho...
On the other hand, this new 48-core chip is very much a tiled architecture, and I don't see any evidence of a similar ring bus:
http://asia.cnet.com/crave/2009/12/03/intel-debuts...
Moreover, there's no mention of the new extra-wide vector instruction set from Larrabee in the new 48-core chip, nor would that really be especially useful in its intended "cloud-computing" target market.
MonkeyPaw - Friday, December 4, 2009 - link
I even wonder if Larrabee will have a successor, since it sounds like the initial product is a pretty major disappointment. It's easier for Intel to say the product is not totally dead, but an announcement like this just keeps investors looking to the future. After all, Intel has a very long way to go in a segment of the market where they have totally sucked for a long while. Could Larrabee become another Itanic, where Intel's dream is to take over a segment with their own architecture, only to have it become a niche product?
ssj4Gogeta - Saturday, December 5, 2009 - link
Intel NEEDS a GPU. AMD is going to integrate the GPU on the processor die and eventually move all the FP work to the GPU. How will Intel's CPUs compete with a TFLOP of compute power?
Olen Ahkcre - Friday, January 8, 2010 - link
If you mean Intel needs to build their own GPGPU, I think you're wrong (my opinion). There are already two very successful companies that make better GPGPUs than Intel: NVIDIA, with ATI/AMD a distant second. But even AMD/ATI is doing better than Intel; otherwise Intel wouldn't have canceled Larrabee (my opinion). And in building their own GPGPU they've overlooked something critical... something very critical. The way Intel and AMD are handling the direction of CPUs, they're just adding more and more cores and perhaps wasting valuable transistor real estate that could be put to better use, like more cache memory.
They're shifting attention away from where they should be focusing: improving the overall system design of the computer. By improving I don't mean adding more cores to the CPU, but rethinking the CPU/GPGPU relationship and redesigning the whole computer system. They need, as they call it, another "paradigm shift".
And integrating a GPU into the CPU isn't really a good idea. You get unnecessary design complexity and end up with a lower-performing CPU/graphics design... in an engineering textbook this would be a no-no, unless you are aiming at a low-transistor/low-power design, which won't hold a candle to a high-end discrete CPU/GPU design.
How will Intel's CPUs compete with a TFLOP(S) of compute power?
Easy... they need to rethink the CPU and GPU. TFLOPS... tera (trillion) floating-point operations per second... where does it come from? Look underneath the hood and you find stream processors... and what do all stream processors have in common? They do floating-point math...
128-bit-wide floating-point math... There used to be a time when the CPU had a discrete math coprocessor called the FPU (an 80-bit floating-point unit), but rather than create multi-core FPUs (because the concepts, technology, design methodology, and basic knowledge to do so weren't available at the time), they integrated the FPU into the CPU (the irony).
The reason for the reemergence of floating-point processing is that it is HEAVILY utilized in 3D graphics, and its use is universal in computing, especially simulations.
The only thing they don't do is integer operations, and this is the part the CPU design engineers need to look into: creating a discrete part with a massive array of SSE (Streaming SIMD Extensions) units, with some limited integer operations... maybe.
The CPU might be stripped of unnecessary components and instead integrated with the chipset to become a super-fast system I/O manager... handling I/O, thread management/scheduling, system watchdog duties, etc. They've already moved in that direction by incorporating the memory controller hub into the CPU.
There are some details I have left out since it's beyond the scope of this forum/venue.
But it is my belief that traditional multi-core CPUs are on a fast track to becoming obsolete.
swaaye - Sunday, December 6, 2009 - link
GPUs are very limited in their flexibility, and so would Larrabee have been. Few programs align ideally with the "massive numbers of simple processing units" design, so general-purpose CPUs aren't going anywhere anytime soon.
qcmadness - Friday, December 4, 2009 - link
Larrabee is for both the HPC (GPGPU) and GPU markets.
NVIDIA is aggressively entering the HPC market with Fermi, which could harm Intel's own x86 server market.
Both Intel and AMD are touting heterogeneous computing in the distant future.
Ryan Smith - Friday, December 4, 2009 - link
Intel is like any other company: expand or die. I have no doubt that there will be a Larrabee product at some point in the future. Intel can't afford to ignore the massively parallel computing/HPC market.
Olen Ahkcre - Friday, January 8, 2010 - link
That is the problem, isn't it: expand or die. But Intel is trying to expand into the wrong market. They're trying to build a general-purpose GPU instead of expanding into the 128-bit (super)computing market.
NVIDIA inadvertently built a massive array of stream processors (about 216/240) -- compute shaders, or more plainly speaking, 128-bit math processors -- into their GPUs, as needed for 3D graphics acceleration. Then they bought the Ageia PhysX (physics) engine and incorporated it into their GPUs, and what they ended up with was the start of the next generation of supercomputing. The supercomputer built at the University of Antwerp uses 13 NVIDIA GPUs, is 4 times faster than their previous supercomputer cluster of 256 AMD Opterons (512 cores), and cost under 6000 euros. The previous supercomputer cost over a million euros. This is a significant development!
Intel is trying to go head-to-head with NVIDIA's GPUs, when they need to be working on a 128-bit processor that complements GPUs, but from the CPU side.
A 128-bit processor not limited to just math, but also capable of some limited integer operations -- a processor with a massive array of 128-bit stream units to do PhysX or physics calculations for better 3D graphics and visual effects. It would allow NVIDIA to optimize their GPU design for 3D graphics rather than allocating GPU resources to PhysX (physics) calculations, or falling into a more general-purpose design that ends up wasting transistor real estate... note that NVIDIA (temporarily) lost their GPU crown to AMD/ATI.
On the other hand, Intel could keep trying to build their GPGPU and neglect their end of the spectrum in the massively parallel computing/HPC market.