Gothmoth - Tuesday, January 26, 2016 - link
It's all so boring... I'm waiting for 8 or 12 core desktop systems that can be overclocked and don't run at 140 watts.
Besides my day job I am a freelance 3D artist and I need all the computing power I can get.
My 5820K is getting a bit long in the tooth... well, it could have been faster from the start.
Yet Intel makes baby steps when it comes to computing power.
As far as I'm concerned they could get rid of the GPU part and spend that die area on giving me a faster CPU.
darckhart - Tuesday, January 26, 2016 - link
I hear that. I have a 6-core on X58 and... nope, nothing interesting for me yet. Any new 6-core is on a dying socket, or a half-step behind. No sense in "upgrading" to a 4-core with "a much better but still sucky compared to discrete graphics" part. And now that Intel wants to lock Xeon to server chipsets... meh.
Ammaross - Tuesday, January 26, 2016 - link
Ditto here. My next upgrade will be a 6- or 8-core. These 4-core i7 chips (even OCed to 4.4 GHz) sit at 100% way too much for my needs.
willis936 - Tuesday, January 26, 2016 - link
You're asking for the impossible. I'm fine with the power bill of a single machine if it gives me king-of-the-hill single-threaded performance, 4 very fast memory channels, lots of PCIe lanes to play with, and lots of cores for multithreaded applications. I have a 48-thread system, but it struggles to host a modded MC server because 2.7 GHz IVB doesn't cut it for single-threaded performance.
ImSpartacus - Tuesday, January 26, 2016 - link
Impossible? Idk, wasn't Cannonlake-S rumored to bring 8 cores to the 95W lineup? 10nm seems like a good time for a gun-shy Intel to finally bump core count.
And Intel is already increasing core count to 10 on Broadwell-E, right? That would queue up Skylake-E to hypothetically debut 12 cores at around the same time as the release of Cannonlake-S.
BillyONeal - Wednesday, January 27, 2016 - link
I doubt that you're going to see 50+% more cores at the same frequency in the same power envelope. Node shrinks decrease power use, but not by *that* much.
beginner99 - Wednesday, January 27, 2016 - link
If anything, Intel will up cores to maybe 6, but surely not to eight in Cannonlake. I'll believe it when I see it. Fact is, most users are fine with dual-cores. If people needed processing power there would not be a tablet hype. And even the worst big-core Celeron parts run circles around tablet CPUs.
Broadwell-E's 10-core will, as leaks say, slot in above the existing 8-core at $1500. So nothing changes at all.
Samus - Wednesday, January 27, 2016 - link
Far from impossible. If AMD can make an 8-core 4GHz desktop CPU that uses 200W, Intel should be able to do the same at 100W. They're just lazy. Because they can be.
ddriver - Friday, January 29, 2016 - link
Everyone can be lazy; not everyone can afford it, though.
icrf - Tuesday, January 26, 2016 - link
I've always been a bit disappointed in the huge die area going to a GPU that I have zero intention of ever using, but I also know I'm not an average consumer. I guess that's what the enthusiast -E lines are for. They don't have an iGPU and they have more CPU cores. If you want even more cores, there's a pile of Xeons for the right price.
Do I wish they'd offer me more cores for less money? Of course. There just aren't enough people like me to make Intel offer such a thing. They'd just see it as cannibalizing high-margin Xeon sales.
nils_ - Wednesday, January 27, 2016 - link
If it weren't for NVidia's atrocious Linux drivers (although the Intel drivers for Skylake also suck big time) I'd be using my dGPU for everything. As such, I'm one of the few people who actually use the iGPU in a high-end Skylake...
rtho782 - Wednesday, January 27, 2016 - link
I thought the nvidia binary drivers were supposed to be very good for Linux?
bug77 - Wednesday, January 27, 2016 - link
They are. I'm on an intel+nvidia Linux setup right now and it works just fine. Dual monitor, too.
icrf - Wednesday, January 27, 2016 - link
My understanding is the open source nvidia drivers suck, but the proprietary binaries are good.
BurntMyBacon - Thursday, January 28, 2016 - link
@icrf: "My understanding is the open source nvidia drivers suck, but the proprietary binaries are good."
That has been my experience (with the notable exception of the 900 series being a bit buggy).
BrokenCrayons - Wednesday, January 27, 2016 - link
My experience with NV's Linux drivers has been utterly free of problems. I'd happily put them into the "it just works" category.
nils_ - Wednesday, January 27, 2016 - link
They do have problems with the GTX 900 series and tend to pretty much break with every new kernel release. Also, they won't work with Wayland.
BurntMyBacon - Thursday, January 28, 2016 - link
@nils_: "They do have problems with the GTX 900 series ..."
I've noticed this as well. They are workable, but hopefully they get this straightened out soon. It is a speck of dirt on an otherwise (relatively) clean record for their binary drivers.
@nils_: "... and tend to pretty much break with every new kernel release."
This is a function of the drivers being closed source binary drivers. nVidia has to recompile the driver for you every time the kernel is updated.
nils_ - Wednesday, February 3, 2016 - link
Yeah, the drivers being closed is part of the problem. But as it stands I can use the iGPU fine for Linux work and dual-boot into Windows using the NVidia card for gaming. It's probably more energy efficient as well, if only there were a way to completely disable the Nvidia card in Linux...
Fallen Kell - Wednesday, January 27, 2016 - link
The binary drivers are the best for Linux. It is why you don't see a single AMD GPU in any SteamMachine Linux system. The only people who think they have atrocious Linux drivers are the people using the open source driver or expecting the same performance as on Windows (it won't be the same until more people use Linux).
BurntMyBacon - Thursday, January 28, 2016 - link
@Fallen Kell: "The binary drivers are the best for Linux."
I'd edit that to the best for Linux "gaming". If you are looking for a driver that doesn't make you update with every kernel update, then the binaries aren't for you.
@Fallen Kell: "It is why you don't see a single AMD GPU in any SteamMachine linux system."
Pretty much it.
nVidia binary driver >> ATi binary driver > ATi open source driver >> nVidia open source driver
Whereas ATi's open source driver is approaching the quality of their proprietary binary driver, nVidia has chosen not to put much support into the open source driver and to create a higher quality binary driver instead.
beginner99 - Wednesday, January 27, 2016 - link
The problem with iGPUs is that they aren't really integrated. To make use of them outside of graphics you need software programmed for it, and you also need to install the corresponding drivers/runtimes (OpenCL, for example). It's a mess, and relying on software to do this is a crappy idea. The chip itself should be able to assign tasks to the GPU. Hell, why not create a new instruction for this? (though I'm doubtful that would actually work). Still, my point holds: CPU and iGPU must be further integrated.
icrf - Wednesday, January 27, 2016 - link
What would the difference be between having additional instructions that software has to implement and having software written in OpenCL? For that matter, OpenCL code can run happily on a CPU, too, so that means all code should just be OpenCL, right?
The thing is, GPUs are good at running one kind of code: simple, massively parallel tasks. Most code isn't like that, because most problems aren't like that. Branchy single-threaded code can't make any use of a GPU, and that's what most code is. Nine women can't make one baby in a month.
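To make the "software + driver/runtime" point concrete, here is a minimal OpenCL host-side sketch in C (not from the article; the kernel, buffer sizes and missing error checks are purely illustrative). The runtime discovers a device, compiles the kernel text at run time and dispatches the work - and swapping CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU runs the very same code on the CPU, which is what icrf is getting at.

    /* Minimal OpenCL host-side sketch of the "software + runtime" path
     * discussed above. Error checking omitted for brevity. */
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    size_t i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void) {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* The runtime/driver layer: find a platform and a device... */
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);  /* or CL_DEVICE_TYPE_CPU */

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* ...compile the kernel source at run time... */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        /* ...and marshal buffers to the device before launching work. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[42] = %f\n", c[42]);
        return 0;
    }

That run-time discovery/compile/dispatch layer is exactly what the "new instruction" idea in the comment above would be trying to bypass.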
BurntMyBacon - Thursday, January 28, 2016 - link
@icrf: "What would the difference be between having additional instructions that software has to implement and having software written in OpenCL?"I think you missed his point. He is saying that OpenCL (and DirectCompute et. al) is a software + driver/runtime solution. Rather than have a solution where code is interpreted through a software layer/driver, he'd rather see a GPU ISA for direct use of instruction on the GPU.
Their is a performance advantage to be had when you remove all those extra cycles interpreting commands. The code would also be much more predictable in the sense that you would know exactly what is being executed beforehand.
On the flip side, an interpreter affords the hardware manufacturer more flexibility in implementation. Also, given the wildly differing compute implementations on the market, it would be very difficult to make an ISA that could be widely used. (Don't forget that this ISA could very easily be used on GPUs as a coprocessor addon.) You would be in the difficult position of choosing which architecture to force everyone else to conform to. If you go with the most widely used, then you force nVidia and ATi to scale back to match Intel's IGP compute capability. If you choose the most widely used discrete option, then you loose out on the double precision capabilities present in nVidia's upper end Kepler and Fermi architectures as well as a huge chunk of ATi's GCN capabilities. Maxwell (900 series) removed a lot of double precision compute capability to get more gaming relevant single precision capability. If you decide to use the most mature architecture at this point, then nVidia and Intel are forced to make potentially huge changes to their architecture to support a metric crap ton (OK, I'm exaggerating, but they still wouldn't like it) of HSA / GCN capabilities that may or may not get used in practice.
SkipPerk - Friday, February 5, 2016 - link
"Nine women can't make one baby in a month"No, but one man can. If you have any leftover you can use it as thermal paste.
alysdexia - Sunday, December 16, 2018 - link
wrong
BurntMyBacon - Thursday, January 28, 2016 - link
@beginner99: "The problem with iGPUs is that they arent really integrated. ... The chips itself should be able to assign tasks to the GPU. Hell why not create a new instruction for this? ... CPU and iGPU must be further integrated."Sounds exactly like AMD when they first started talking fusion. You might look into HSA and its current implementation in Carrizo. Still more work to do, but the bigger issue is just getting developers to program for it.
patrickjp93 - Wednesday, February 3, 2016 - link
Still requires you to go through drivers and HSAIL. Intel's on the right path with OpenMP.
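For reference, the OpenMP route mentioned here looks roughly like the sketch below. This is illustrative only: whether the loop actually lands on an iGPU (rather than quietly falling back to the host CPU) depends entirely on the compiler and runtime it is built with, and the loop itself is just a stand-in.

    /* Sketch of OpenMP 4.x-style target offload: the directive asks the
     * compiler/runtime to run the loop on an attached device, falling
     * back to the host if no offload target is available. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        /* map() clauses describe the data movement the runtime must do. */
        #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
        for (int i = 0; i < N; ++i)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
        return 0;
    }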
BrokenCrayons - Wednesday, January 27, 2016 - link
I'm in the opposite camp from you. I dislike feeling compelled to add an entire extra PCB just for graphics processing that I need only infrequently, to get maybe the missing 25% of performance that iGPUs don't currently offer. I'd much prefer Intel leaving core count at 2-4 low-wattage cores paired up with one of their 72EU + eDRAM GPUs in a sub-15 watt package. Discrete cards were fine a decade ago, but Intel's been doing a fantastic job of bringing integrated graphics up in performance since the GMA 950 was released (that thing was quite impressive in its day and I was thrilled to kick the GeForce HairDryer GT out of my desktop after upgrading to an early C2D chip and mommyboard). In fact, the proof of Intel's ability to chew up the graphics stack is in the fact that NV isn't even releasing GPUs under the 900 series. If I'm not mistaken, the lowest end 900 they offer is the GTX 950 in desktops.
I personally would love to see Intel put more die space into graphics so I can finally get a reasonably sized PC that doesn't need to be bothered with expansion slots stuffed with absurdly huge graphics cards - festooned with idiotically dressed women with deluded body proportions - that suck up 150+ watts of power that is mostly turned into waste heat rather than accomplishing something useful.
BurntMyBacon - Thursday, January 28, 2016 - link
@BrokenCrayons: " In fact, the proof of Intel's ability to chew up the graphics stack is in the fact that NV isn't even releasing GPUs under the 900 series. If I'm not mistaken, the lowest end 900 they offer is the GTX 950 in desktops."If I recall correctly, while there were plenty of architectural improvements to be had, the lion's share of the gaming performance improvement came from reallocating die area once used for DP (compute oriented) hardware for use by SP (gaming oriented) hardware. Given that the low end 700 series cards didn't have much in the way of DP hardware to begin with, one would not expect much in the way of low end 900 series chips (or rather Maxwell).
That said, you are entirely correct in that the lack of progress on the low end of the discrete card market has allowed AMD and now Intel to release IGPs that make low end discretes seem redundant / irrelevant. Keep in mind, however, that AMD got there by sacrificing half of their "high end" CPU compute units and Intel has a two node process advantage over the current low end discrete chips on the market. We may see low end cards return to relevance with the new process node used for the next gen discrete cards.
valentin-835 - Thursday, January 28, 2016 - link
Me too. I bought an AMD 7850K last year, with the mobo, SSD and a DVD writer, all in a small neat box for under 500 bucks. Sure it runs hot, but it delivers. I get 40-45 fps in Battlefield 4 on medium settings. It's not noisy because it has only one fan, whose speed you can adjust. I got so tired of big, heavy boxes that sound like a vacuum cleaner under full load. Not to mention the prices for PSUs, badass video cards, etc...
The folks who still cling to these big boxes live in another world. I have a PS4 and one has to see the performance it delivers from an APU plus a good programming interface (probably a combination of OpenCL and Mantle proprietary to the PS4).
I still don't get why so many out there are against parallel computing. It's the future. Look at Apple. They went ahead with the Metal API and it's doing wonders for their tablets and smartphones. This is the way forward. Well integrated, multicore CPUs and GPUs, power efficient, or just SoCs using advanced parallel programming APIs.
Those who haven't started yet better get going. Vulkan is coming out soon. That plus OpenCL and SPIR is the way forward.
sna1970 - Thursday, January 28, 2016 - link
You are kidding, right? We already have Xeons with up to 18 cores... your problem is not the chip, your problem is BUDGET.
ddriver - Friday, January 29, 2016 - link
For 3D rendering, you are better off with multiple slower systems - 3D rendering scales very well even across different machines, so you should just build a render farm out of several midrange systems. It will give you far more performance than any single system, and it will have better price/performance too.
For render nodes you can just go with a mobo, CPU and RAM; you don't really need anything else. You can boot over LAN.
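A back-of-the-envelope version of that argument, with made-up prices and throughput numbers purely for illustration:

    /* Toy price/performance comparison for an offline render farm.
     * All prices and scores are placeholders, not real benchmarks. */
    #include <stdio.h>

    int main(void) {
        /* Hypothetical single high-end workstation. */
        double big_box_price = 3000.0;   /* USD */
        double big_box_score = 1500.0;   /* arbitrary render-throughput units */

        /* Hypothetical midrange render node (mobo + CPU + RAM, LAN boot). */
        double node_price = 700.0;
        double node_score = 600.0;
        int    nodes      = 4;

        double farm_price = node_price * nodes;
        double farm_score = node_score * nodes;  /* frame/tile rendering scales ~linearly */

        printf("big box: %.0f units for $%.0f (%.2f units/$)\n",
               big_box_score, big_box_price, big_box_score / big_box_price);
        printf("farm:    %.0f units for $%.0f (%.2f units/$)\n",
               farm_score, farm_price, farm_score / farm_price);
        return 0;
    }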
jasonelmore - Tuesday, January 26, 2016 - link
I'm interested in seeing the performance gains from the newly designed eDRAM, now that it's not used as a victim cache. I'm surprised mobile did not get released first, as it seems Apple is awaiting these parts for the MacBook Pro series.
nathanddrews - Tuesday, January 26, 2016 - link
These are mobile parts, mobile Xeon.
jasonelmore - Wednesday, January 27, 2016 - link
Nobody really buys mobile Xeons in bulk. Apple sure doesn't.
nathanddrews - Wednesday, January 27, 2016 - link
The article made that clear as well. You can get one from HP.
fanofanand - Tuesday, January 26, 2016 - link
"The fact that Intel is approaching the mobile Xeon market first, rather than the consumer market as in Haswell, should be noted."
Sounds like that's precisely what they are doing: going mobile first.
Flunk - Tuesday, January 26, 2016 - link
Nothing is stopping them from releasing MacBook Pros using Xeon chips. They do claim they're workstations, after all.
baobrain - Tuesday, January 26, 2016 - link
So instead of using a smaller process to make better CPUs, Intel puts the extra space into making a useless iGPU that nobody wants, even in enthusiast parts.
nandnandnand - Tuesday, January 26, 2016 - link
Only enthusiasts want more than 4 cores. The majority of the market will use the iGPU on laptops.
michael2k - Tuesday, January 26, 2016 - link
The vast majority of the market wants the iGPU because it is cheaper and more power efficient than using an external GPU. You're the odd duck out here.
BillyONeal - Wednesday, January 27, 2016 - link
There aren't magical power gains to be had here just because the GPU is on-die. The power improvements here come from Intel being 2-3 process nodes ahead of the dGPU vendors.
patrickjp93 - Wednesday, February 3, 2016 - link
Uh, yes there are. You don't have to transmit the power over the PCIe slot, and Intel's manufacturing is much more tuned for lowering power usage.
extide - Tuesday, January 26, 2016 - link
Yeah, no. I want my next laptop to have Iris Pro and NO dGPU! I don't game much on mobile and Iris Pro will be more than enough. These chips are extremely exciting to me. My perfect laptop would be a 4+4e CPU + 32GB DDR4, a 1TB PCIe SSD and a 2TB spinning disk. C'mon Clevo, let's do it! Also, I want it in a sturdy chassis, like a Dell/HP/Lenovo business-class laptop. My current Clevo, which is awesome, is all plastic and I am always afraid of breaking it. Plus it has a dGPU that I don't use much, but even when it's powered down it still uses some power.
DanNeely - Tuesday, January 26, 2016 - link
Is anyone selling full-power Skylake mobile (45W, 4 physical cores) chips in a laptop that doesn't include a dGPU? I was looking the other week, and couldn't find anything newer than Haswell with full-power CPUs unless I also included a dGPU.
boogerlad - Tuesday, January 26, 2016 - link
The NP3652 is the only one.
tipoo - Tuesday, January 26, 2016 - link
I have the Iris Pro Macbook 15". I was surprised that it has to use a lot more TDP than the discreet graphics models, regularly hovering around 99C under CPU/GPU load. Iris Pro is efficient at idle, but not at load, where it falls well behind AMD/Nvidia graphics architectures in efficiency.
BillyONeal - Wednesday, January 27, 2016 - link
discrete, not discreet :) (Unless you mean a GPU that doesn't tell secrets ;))
alysdexia - Sunday, December 16, 2018 - link
That's temperature, not power.
Anonymous Blowhard - Wednesday, January 27, 2016 - link
Follow the link into the AT Forums thread; there are going to be i5/i7 quads with 4+4e setups that should be pin-compatible with the existing mobile socket. Can't imagine it'll take long after launch for Clevo/Sager to offer them as an option. I'm sure Dell/HP/Lenovo will offer that as a "mobile workstation" as well, but you'll pay out the wazoo for that.
Gazzy - Tuesday, January 26, 2016 - link
I wonder if those will make it into the MacBook Pro 15 line.
psurge - Tuesday, January 26, 2016 - link
I hope so. Would be nice to see it coupled with ECC memory as well.
tipoo - Tuesday, January 26, 2016 - link
Pretty unlikely. Heck, lots of laptop FirePros don't even have ECC, and those are meant for pro use. The Iris, not so much.
psurge - Wednesday, January 27, 2016 - link
I think GT4e will be very nice for GPU compute, especially given the power envelope. Anyway, the GPU doesn't have much to do with my desire for ECC - I want a mobile workstation with plenty of RAM (for compilation, and for running memory-hungry apps in VMs) and I care about reliability. This paper: http://static.googleusercontent.com/media/research... found that while lots of DIMMs don't experience correctable errors, the ones that do can experience a huge number (thousands per DIMM per year on average). Granted, it's an old paper and it is measuring errors in a HW/SW environment that is likely quite different from what I'd expect to find in a laptop, but I still find it pretty scary.
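To put "thousands per DIMM per year" into perspective, a quick rate conversion - the 4,000/year input below is just an assumed round number in that ballpark, not a figure taken from the paper:

    /* Rough rate conversion for the "thousands of correctable errors per
     * DIMM per year" figure quoted above. 4000/year is an assumed value. */
    #include <stdio.h>

    int main(void) {
        double errors_per_year = 4000.0;          /* per affected DIMM (assumed) */
        double hours_per_year  = 365.0 * 24.0;
        double per_hour        = errors_per_year / hours_per_year;

        printf("%.2f correctable errors/hour, or about one every %.0f minutes\n",
               per_hour, 60.0 / per_hour);
        return 0;
    }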
nils_ - Wednesday, January 27, 2016 - link
Yeah, it seems the problem is way overblown, otherwise there wouldn't be any non-ECC parts.
Notmyusualid - Wednesday, January 27, 2016 - link
Your link goes nowhere... but I think I know the research paper you refer to. I even think if I searched enough drives I'd still find it.
But in the end, non-ECC RAM still allowed errors through, and the ECC was nigh-on perfect. Depending on your workload, this may or may not be a significant problem.
I'd rather have a laptop with ECC RAM, and I'm willing to pay for it.
psurge - Wednesday, January 27, 2016 - link
Sorry: http://static.googleusercontent.com/media/research...
I'm willing to pay extra for it as well.
vcsg01 - Tuesday, January 26, 2016 - link
Still waiting for Intel to release an i3 with the top-configuration Iris Pro graphics. That would be a hit for HTPC builders and casual gamers, especially if it only adds 50-75 dollars on top of the base i3 CPU cost.
ImSpartacus - Tuesday, January 26, 2016 - link
I'd keep dreaming. That would complicate Intel's lineup (for my budget, should I get more GPU or more CPU?) and cannibalize sales of expensive SKUs that are currently the only way to get better graphics.
Intel is very good at making money.
vcsg01 - Wednesday, January 27, 2016 - link
Not necessarily. Someone doing a budget build who is buying an i3-level processor is not at all likely to splurge on an i7 that is more than twice, and maybe 3x, the cost just to get the Intel graphics. They are just going to spend the $50-75 on a dedicated GPU. Adding Iris Pro to i3 chips may put more money in Intel's pocket in that case, while also taking sales away from dGPU vendors.
jsntech - Tuesday, January 26, 2016 - link
I must be missing something, because mobile Core i3/i5/i7 Skylake parts are listed in the referenced Intel document with 'Jan 16' pricing. If I am seeing that correctly, why do you say "Intel is approaching the mobile Xeon market first, rather than the consumer market..."?
vcsg01 - Tuesday, January 26, 2016 - link
He's talking about processors with Iris Pro 580 graphics (GT4e).
jsntech - Wednesday, January 27, 2016 - link
Thanks. Guess I got confused by Sweepr's referenced post, where he says:
"Now the Iris Pro 580 Core i5/i7 lineup:
i7-6970HQ...
i7-6870HQ...
i7-6770HQ...
i5-6350HQ..."
ltcommanderdata - Wednesday, January 27, 2016 - link
I'm guessing Skylake eDRAM consumer desktop CPUs would require new chipsets/motherboards, so they're unlikely to happen mid-cycle with Skylake and will instead have to wait for Kaby Lake.
In the meantime, I'd be interested if Intel released a faster desktop Broadwell eDRAM CPU as a Devil's Canyon successor. Say 3.8 GHz, the full 8MB L3 cache, official DDR3-1866 support, the NGPTIM, and an 88W TDP. The Core i7-5775C Broadwell with eDRAM can be just as good as the Core i7-6700K Skylake for gaming despite the lower 65W TDP, so a high-clocked eDRAM Broadwell may well be popular among enthusiasts until eDRAM Kaby Lakes can show up. With Skylake and up only officially being supported on Windows 10 in the long term, a new high-clocked eDRAM Broadwell would also appeal to enthusiasts wanting to maintain a multi-boot Windows 7/8.1/10 system.
Oxford Guy - Wednesday, January 27, 2016 - link
"The Core i7-5775C Broadwell with eDRAM can be just as good as the Core i7-6700K Skylake for gaming"Better. It beats it despite a clock and power consumption deficit.
It's odd that this article didn't mention the gaming performance of the Broadwell C chips, something that makes current desktop Skylake (like 6700K) look rather pointless for a gaming-centric build.
Anato - Wednesday, January 27, 2016 - link
Need ECC, Intel graphics and HDMI 2.0 in mATX. Intel is moving in the wrong direction by making Xeons run only on C-series chipsets. For many tasks the consumer Z170-series boards are more abundant and have more features than the C-series, though they lack ECC. C-series boards are difficult to acquire (in Europe) and most don't offer HDMI 2.0 in mATX form.
Why can't Z- and Q-series mobos support ECC with a purchasable code?
Oxford Guy - Wednesday, January 27, 2016 - link
ECC should have been standard on enthusiast platforms for a long time now. The performance hit is minimal in comparison with the benefit.
People like to mock Apple, but the Apple Lisa from 1983 came with ECC, and every chip was individually registered so that if one failed the board would still work fine without it.
One would think that after over 30 years desktop computing would have progressed to the point of surpassing a 1983 desktop.
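For anyone wondering what the correction actually involves, here is a toy Hamming(7,4) single-error-correcting code in C. It's only a sketch of the idea - real ECC DIMMs implement SECDED over 64 data bits plus 8 check bits in hardware - but it shows how a flipped bit is located and repaired:

    /* Toy Hamming(7,4) encoder/corrector: the same idea ECC DIMMs implement
     * in hardware (real modules use SECDED over 64 data + 8 check bits). */
    #include <stdio.h>

    /* Encode 4 data bits d3..d0 into 7 bits with parity at positions 1, 2, 4. */
    static unsigned encode(unsigned d) {
        unsigned d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
        unsigned p1 = d0 ^ d1 ^ d3;
        unsigned p2 = d0 ^ d2 ^ d3;
        unsigned p4 = d1 ^ d2 ^ d3;
        /* bit positions 1..7: p1 p2 d0 p4 d1 d2 d3 */
        return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
    }

    /* Recompute parity; a nonzero syndrome is the 1-based position of the flipped bit. */
    static unsigned correct(unsigned c) {
        unsigned b[8];
        for (int i = 1; i <= 7; ++i) b[i] = (c >> (i - 1)) & 1;
        unsigned s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
        unsigned s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
        unsigned s4 = b[4] ^ b[5] ^ b[6] ^ b[7];
        unsigned syndrome = s1 | (s2 << 1) | (s4 << 2);
        if (syndrome) c ^= 1u << (syndrome - 1);   /* flip the bad bit back */
        return c;
    }

    int main(void) {
        unsigned word    = encode(0xB);          /* data = 1011 */
        unsigned flipped = word ^ (1u << 4);     /* simulate a single-bit upset */
        printf("sent 0x%02X, got 0x%02X, corrected 0x%02X\n",
               word, flipped, correct(flipped));
        return 0;
    }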
sfuzzz - Thursday, January 28, 2016 - link
As an example, all AMD FX chips and chipsets natively support ECC DRAM; the problem lies in motherboard BIOSes, or really in motherboard manufacturers' laziness about implementing it (not many boards support it). Intel was always good at cutting costs, and they know, or were good at convincing the market, that ECC DRAM isn't needed on consumer machines, so they offer ECC only on "Xeon platforms".
Dug - Wednesday, January 27, 2016 - link
Confusing because of the wording and the graph. The difference between the 2.8 and 2.9 GHz parts is $190, not $55-$56.
I know what you are trying to say. Just pointing out how it may be taken the wrong way.
"As Sweepr points out, the difference between the 2.8-2.9 GHz parts is only $55-56. That is for both the increase in graphics EUs (24 to 72) as well as that extra on-package eDRAM."
And who the hell is going to buy the 1575 @ $1207 over the 1545 @ $679? That's a lot of money for 100 MHz.
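Worked out with the prices quoted above (and taking the gap as the 100 MHz mentioned):

    /* What the top-SKU premium costs per extra MHz, using the prices above. */
    #include <stdio.h>

    int main(void) {
        double price_1575 = 1207.0;   /* $ (from the comment) */
        double price_1545 = 679.0;    /* $ (from the comment) */
        double delta_mhz  = 100.0;    /* clock difference cited above */

        double premium = price_1575 - price_1545;
        printf("premium: $%.0f, i.e. $%.2f per extra MHz\n", premium, premium / delta_mhz);
        return 0;
    }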
milkod2001 - Thursday, January 28, 2016 - link
There is hope that Dell will throw those mobile Xeons with GT4e into the XPS 15 chassis and create the ultimate sexy mobile workstation.
TheJian - Saturday, February 6, 2016 - link
Wow, so these are not competition for discrete cards. Intel appears to want to charge an arm and a leg and for some reason NOT compete at all on the desktop with these. Do production yields suck or something? What reason would you have not to release every desktop chip with this cache if it was easy to make? Most of us actually gaming turn the thing off because they're so weak on desktop chips (myself included on my Devil's Canyon). There must be some issue with production or something; I can't think of a good reason NOT to sell a desktop version for an extra $50 or $100.
That said, they still have MANY miles to go before I'd even consider this junk vs. discrete as a gamer (like 7nm, with HBM2 on package or something?). 720p, etc. are not resolutions I want to play at, and I like things in ANY game I play MAXED, or I wait for a better discrete card purchase (like now, waiting on 14/16nm, gaming on GOG stuff until then & still having a blast)... LOL.