I disagree on most points, but I have to agree with you on Intel's current strategy. I know they can release the 10nm parts this year. It might be painful and expensive, and it works for the near term, but as ARM continues to build bigger and faster parts, the gap is shrinking year by year.
Next thing they (Intel) know, people will start using PCs based on ARM. Microsoft's strategy of releasing and maintaining an ARM-based OS (Windows Phone) doesn't look so bad now. Intel is ignoring the mainstream/consumer market that funds the R&D and tools required for the lucrative products in the server/enterprise market.
Well, Windows on ARM didn't work out (RT), unlike Android on x86, which is good. But maybe with UWP Microsoft will have another go -- they've opened pathways to get legacy software onto UWP, which would enable processing on ARM.
No it doesn't. Full UWP apps will run on any platform, but legacy software (i.e. x86 software) that has just been packaged as a UWP app (so it can be sold via the Windows Store) won't be able to run on anything but x86 platforms.
As someone looking to buy a new laptop at the end of this year, I'm hoping that this will drive Skylake laptop prices down further. However, given that Kaby Lake is probably releasing to retail in Q1 next year, I don't think I will see much beyond the savings from holiday pricing.
So basically this iteration of Intel CPUs opens a competitive window for AMD. This is the first time in 10 years that Intel doesn't have some improvement in IPC. We'll have to wait for the benchmarks and reviews, but this is a chance for AMD. AMD needs to and must deliver so we consumers have some healthy competition.
> There are a couple of reasons for this - the one Intel discussed with us is that it comes down to performance and OEM requests. An i5 in a 4.5W form factor (or as high as 7W in cTDP Up) performs essentially on par with the KBL-U i5 parts, and OEM customers requested that the naming scheme change to reflect that
What a load of bull. More misleading marketing tricks from Intel once again.
The thing is, Core i5 should never have been in the 4.5W range - when it was, it just meant that it was either heavily throttled or it would overheat.
So Intel created m5 to say that it's a lower-performance product that matches the 4.5W TDP better.
But NOW, Intel is renaming m5 to Core i5 to trick people into thinking they get performance similar to what it delivers at higher TDPs. And saying "but Core i5 at 4.5W is the same as m5" - yeah, but Core i5 (with a level of performance worthy of the i5 name) NEVER WORKED WELL at 4.5W - that's why it needed a higher TDP.
This is the same cr@p Intel pulled when it renamed mobile Atom chips to Celeron and Pentium.
So the new MacBook Pro won't be here until January? Iris Graphics-equipped models only then? Or do you think Apple will announce a non-Iris equipped variant first this year, then more powerful models next year?
Wish they had already released; yesterday my Sandy Bridge died (likely PSU + motherboard). I would have preferred replacing it with Kaby Lake + Z270 rather than Skylake + Z170.
If i3, i5, and i7 processors are all dual core processors with Hyper-Threading, equivalent to what used to be called i3 processors, does that mean that when the rest of the range is launched, quad-cores without Hyper-Threading, formerly i5, would be i9, i11, and maybe i14, and quad cores with Hyper-Threading, formerly i7, would now be i15, i17, and i19?
Does anyone know if both HDR10 and Dolby Vision format types are supported by the new 7th-gen media block? I see it has added support for 10-bit HEVC decoding and HDR to SDR tone mapping, but I don't see any information on supported HDR standards. The open standard does up to 10-bit static while the proprietary one does up to 12-bit dynamic.
hansmuff - Tuesday, August 30, 2016 - link
Does any of the new fixed-function logic that is part of the GPU get to work when I use a discrete GPU instead of the integrated one?
I remember that on my old SB chip, the GPU was just turned off because I use discrete. How have things changed, if at all?
Ryan Smith - Tuesday, August 30, 2016 - link
Typically you'll be using the dGPU for video decoding since it's closer to the display pipeline. However, you can totally use QuickSync for video encoding, even with a dGPU.
hansmuff - Tuesday, August 30, 2016 - link
Ah yes, QuickSync in particular was a question for me. While NVENC certainly does do a fine job, if I have a hardware encoder lying dormant in the CPU, it might as well do stream encoding for me :)
fabarati - Tuesday, August 30, 2016 - link
I just messed about with NVENC, QSVEncC and x265 when ripping some DVDs. x265 still gives the best quality and size. With an i5-6500, the encoding speed wasn't all that, at around 65 fps. Of course, QSVEncC was closer to 200 fps and NVENC (GTX 1070) clocked in at 1300-2000 fps.
Quality and size of the file follow the opposite order, with x265 looking the best and being the smallest, then QSVEncC and finally NVENC.
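A comparison like this is easy to script. Here's a minimal sketch driving ffmpeg from Python (assuming an ffmpeg build with the libx265, hevc_qsv and hevc_nvenc encoders enabled; the file names are placeholders):
```python
# Minimal sketch: time the three HEVC encoders mentioned above via ffmpeg.
# Assumes an ffmpeg build with libx265, Quick Sync (hevc_qsv) and
# NVENC (hevc_nvenc) support; "input.mkv" is a placeholder file.
import subprocess
import time

for enc in ["libx265", "hevc_qsv", "hevc_nvenc"]:
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mkv", "-c:v", enc, f"out_{enc}.mkv"],
        check=True,
    )
    print(f"{enc}: finished in {time.time() - start:.1f}s")
```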
Guspaz - Tuesday, August 30, 2016 - link
Can you? Last I looked, that required enabling both the dGPU and iGPU simultaneously (and simply not plugging a monitor into the iGPU). Attempts to enable the iGPU while having a dGPU plugged in on my Ivy Bridge resulted in Windows not booting.
nathanddrews - Tuesday, August 30, 2016 - link
I can't speak for your system, but my Z77 motherboard features Virtu multi-GPU support that allows me to use Quick Sync while having my monitor plugged into my dGPU. You have to activate both IGP and dGPU in BIOS, then load both drivers. It worked for me under W7 and W10.
Guspaz - Tuesday, August 30, 2016 - link
Errm, you've got dedicated hardware specifically for the purpose of supporting multiple GPUs (the Lucid Virtu), so that's not really a typical example.
extide - Tuesday, August 30, 2016 - link
Lucid Virtu is all software.
Gigaplex - Tuesday, August 30, 2016 - link
Last I checked, it requires motherboard support. You can't just install some software and expect it to work. That's what they meant by dedicated hardware.
CaedenV - Tuesday, August 30, 2016 - link
yep, there is a chip that enables the Virtu stuff. It is little more than a soft-switch to route traffic to the right chip, but it is still required for the software to work.
CaedenV - Tuesday, August 30, 2016 - link
yep, Virtu never worked on my SB Z68 motherboard, but I upgraded to a Z77 board (same SB CPU) to support TRIM over RAID0/1 for my SSDs and was happy to find that Virtu worked as advertised on the newer board. Used it to rip DVDs and BluRays for a few years, but more recently moved to a newer dGPU as I re-ripped my collection to H.265 to save on server space.
inighthawki - Tuesday, August 30, 2016 - link
It is typically a BIOS option to enable/disable the iGPU in the presence of a discrete GPU. Enabling it should have no ramifications, though, and the iGPU should simply show up as another video adapter on the system (no different than if you plug in an AMD and an NVIDIA card at the same time). I've done this before on my machines and I've never had Windows fail to boot - what configuration do (or did) you have? Perhaps the discrete GPU driver attempted to configure the system as a hybrid configuration (e.g. like on laptops) but it was not compatible for some reason?
Guspaz - Tuesday, August 30, 2016 - link
It was the BIOS setting that I attempted to enable. It's an i7-3770 on a Z77 motherboard and an nVidia GPU (670 at the time) on what was originally Windows 7. Windows 8 didn't help, and I've not tried it with my current GPU (970) or OS (Win10).
inighthawki - Tuesday, August 30, 2016 - link
Hmm, odd. I haven't ever tried on Windows 7 (or Ivy Bridge, for that matter), but my Haswell works flawlessly alongside my GTX 780Ti in Windows 10. I would suspect it could just be a driver issue.
CaedenV - Tuesday, August 30, 2016 - link
Ivy still required Lucid to work (my wife's desktop does not have it, so I can pick one or the other). But newer boards came with the feature as standard.
Still, enabling the onboard graphics should not bring instability. Must be a bad iGPU or a driver issue at play there. Enabling the iGPU should just turn off the dGPU in systems without Virtu.
techieboi - Thursday, September 1, 2016 - link
I doubt it is possible as yet. But there was a similar hack I read on http://gadgetspost.com that you could try. Not sure though.
Shadowmaster625 - Tuesday, August 30, 2016 - link
PAO? More like TTM. Tick. Tock. Milk.
Drumsticks - Tuesday, August 30, 2016 - link
To be fair, a 12% performance boost on mobile is pretty good, considering it's higher than some new generations have managed with new uArchs or entirely new process nodes.
tipoo - Tuesday, August 30, 2016 - link
Tick Tock Toe
TEAMSWITCHER - Tuesday, August 30, 2016 - link
Can someone please explain to me Intel's product introduction strategy? Why would they want to sell the lower-cost non-Iris Pro Kaby Lake chips first? This makes no sense... essentially diluting the demand for the "better" chips by putting the "lesser" chips on the market MONTHS beforehand.
I understand that PC makers love these "bread and butter" = "ho-hum" SKUs, and can't wait to sell "Kaby Lake" versions of products they are selling right now, but WOW! One would think that Intel would at least attempt to improve profitability by making the "better" (Iris Pro) chips available at the same time. I just don't see the logic...
rhysiam - Tuesday, August 30, 2016 - link
They speculate on page 4 about whether some retooling is required for the new 14nm+ process, and therefore whether perhaps only one or two fabs are going to be up and running early. If Intel has limited output, it makes sense to direct early production to the most valuable CPUs per mm2 of wafer... which is precisely these standard U and Y series processors (maybe some Xeon CPUs are higher earners, but the platform isn't ready yet). Mobile Iris Pro CPUs and most desktop processors require much more die area... meaning less output.
All speculation at this point, but it is a possible answer to your question.
TEAMSWITCHER - Tuesday, August 30, 2016 - link
Ok, that makes sense. I always thought they were the same chips - with the Iris Pro features disabled. But if they are smaller dies, then the bottom-up approach could help to perfect the process before switching to the larger dies - potentially reducing the number of defective chips. Thanks.
A5 - Tuesday, August 30, 2016 - link
It's yield and profit concerns. Doing the big chips first means they have to throw more of them away, which cuts down their profits.
bryanlarsen - Tuesday, August 30, 2016 - link
Smaller chips yield dramatically better when defects are high. Imagine a wafer that holds 100 large chips and there are 100 defects on the wafer. Some of the chips will have more than one defect, so there will be a few chips that are good, perhaps 15-25 or so. Now imagine that you are putting 200 smaller chips on the same wafer with 100 defects. You'll get at least 100 good chips, perhaps 110-120. So unless you can sell the large chip for 6-8x the cost of the small chip, it's more profitable to start with the small chips when defect rates are high.
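That argument can be sanity-checked with a simple Poisson defect model - an illustration only, not Intel's actual yield math. With randomly scattered defects, each chip is defect-free with probability e^-lambda, where lambda is the average number of defects per chip:
```python
# Poisson yield illustration for the argument above (random defects only;
# real-world defect clustering shifts the large-chip count downward,
# toward the 15-25 estimated in the comment).
import math

def expected_good(n_chips, n_defects):
    lam = n_defects / n_chips          # average defects per chip
    return n_chips * math.exp(-lam)    # chips expected to catch zero defects

print(expected_good(100, 100))  # ~36.8 good large chips per wafer
print(expected_good(200, 100))  # ~121.3 good small chips per wafer
```
Under this model the small chips give roughly three times as many sellable dies from the same wafer, which is the core of the "small chips first" logic.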
retrospooty - Tuesday, August 30, 2016 - link
The answer to almost any question like that is: they think it will be more profitable for them. They aren't just thinking about the latest, fastest thing; they are thinking about production, orders, volume and stock levels.
quadrivial - Tuesday, August 30, 2016 - link
The answer is most likely ARM.
Intel has zero competition on the high-end CPU front. People who can't wait will pay just as much for last-gen chips because that's all that's on the market. People who can wait won't mind a few months (and don't really have an option). In contrast, Intel lives in fear of Qualcomm, Samsung, or AMD announcing an ARM chip competitive with x86. Taking a more aggressive stance and coming to market as soon as possible is what Intel shareholders will want to see.
CaedenV - Tuesday, August 30, 2016 - link
True story. I can cry all I want about wanting a faster desktop chip, but the simple fact of the matter is that I will be forced to wait for Intel to release one because I am not tempted to move to AMD any time soon.
But at the same time there are hundreds of schools debating between ARM and Intel Chromebooks and Chromeboxes, and whoever offers the lowest price is going to win the day. Releasing the smaller, cheaper chips ASAP will prevent losing those sales to ARM.
doggface - Wednesday, August 31, 2016 - link
Only problem with your theory is these chips are priced well above the cost of a Chromebook processor. We are talking $200-400 for these chips. ARM processors can be less than $50. Not even the same league.
Intel has ceded the low end of the market to ARM with the discontinuation of Atom.
fanofanand - Wednesday, August 31, 2016 - link
Intel charges more for the chip than most Chromebooks cost.
Meteor2 - Wednesday, August 31, 2016 - link
None of this stuff (KBL) competes with ARM; it's aimed squarely at Apple. Broxton is the ARM competitor.
Meteor2 - Wednesday, August 31, 2016 - link
D'oh, Apollo Lake, not Broxton.
ianmills - Tuesday, August 30, 2016 - link
It is interesting that the desktop CPU will be released at the same time as Zen. Maybe they think Zen might be competitive and are waiting to set the final clocks once Zen performance is known?
CaedenV - Tuesday, August 30, 2016 - link
More likely Intel has been afraid to release anything that would put AMD out of business entirely... but at the same time they are not about to let them have a foothold on the market ever again.
witeken - Tuesday, August 30, 2016 - link
Here is a more up-to-date graph of Intel's 14nm yield than the one shown on page 1: http://www.fudzilla.com/images/stories/2016/Januar...
As you can see, Intel didn't meet the forecast from the AnandTech graph. They were quite close in H2'14, but then the yield learning slowed down considerably. In AT's image, they forecasted parity in H1'15, but in their latest graph (mine is from November '15), they forecasted mid-2016 for yield parity with 22nm. My guess from what I've heard from them is that this is fairly accurate. Yields should be pretty healthy by now.
lilmoe - Tuesday, August 30, 2016 - link
Ridiculous MSRPs as usual.
rahvin - Tuesday, August 30, 2016 - link
As always, but I would consider the lack of built-in USB 3.1 a stunning failure. KL was under design when the 3.1 standard was finalized. Mark my words, this time next year every single phone and tablet is going to be using Type-C. Though you can use the port without the 3.1 spec, it's a stunning failure on Intel's part not to integrate it when they are a key member of the USB forum.
Personally I think they are doing it because they want to push people to Thunderbolt, but it makes their default product a failure that needs another chip to provide what will be default functionality in 12 months.
beginner99 - Wednesday, August 31, 2016 - link
USB-C and USB 3.1 are not the same thing. You can use a C connector with USB 3.0 or the old connector with USB 3.1.
doggface - Wednesday, August 31, 2016 - link
They could have supported USB 3.1, but it is a bandwidth-hungry monster that takes up die space. Instead they support PCIe 3.0 x4, and OEMs can get/pay for the extra USB 3.1 chip out of their own money. USB-C, on the other hand, can be hung off pretty much anything because it is just a connector.
someonesomewherelse - Thursday, September 1, 2016 - link
The only way you can expect lower prices/better chips on the desktop is if Zen is actually faster than AMD marketing claims. Think: i3 becoming 4c/8t, i5 8c/16t, i7 12c/24t, and the no-GPU chip with extra cores/cache desired by PC enthusiasts (let's name it i8 and give it 18c/36t and as much extra L3 cache as you can fit in the space gained by removing the GPU; you could even have i6 (14c/28t) and i4 (6c/12t) parts using the same dies with defects in some of the cores/L3, and an i2 with 3c/6t or 4c/8t but an extremely high clock rate (5GHz+ base)). Plus: no more random removal of instructions (K-class parts missing some of the virtualization features, for example), a turbo limited only by actual heat and power, mandatory ECC, 4-channel memory (which means the high end now gets 8 channels), extra PCIe lanes, unlocked multipliers and so on.
So if you are a religious person, start praying....
someonesomewherelse - Thursday, September 1, 2016 - link
Obviously at the same/lower price as current i3/i5/i7 chips.
lilmoe - Saturday, September 3, 2016 - link
I couldn't give a rat's bottom how cheap Intel's chips become. If AMD gets similar performance at reasonable prices, then good-bye Intel for me.
BillBear - Tuesday, August 30, 2016 - link
Given the new 14nm+ process, is it safe to speculate that the original problem with 14nm that delayed the hell out of Broadwell and outright killed some of the desktop versions of Broadwell was a power leakage problem?
saratoga4 - Tuesday, August 30, 2016 - link
I don't think leakage was a huge problem. Yields appear to have been, though. Intel has some slides explaining that they ramped slower than expected.
Jumangi - Tuesday, August 30, 2016 - link
So possibly no real IPC gains in the desktop version? No point in waiting if you're looking to upgrade then.
wumpus - Tuesday, August 30, 2016 - link
Not for this. I keep looking at that die photo and wondering how hard it would be to replace some of that silly graphics bit with 2-4 cores. Maybe they will do it once Zen ships. Maybe it would generate too much heat and won't work (I suspect the big multicore parts have more cache/core area than these chips have (GPU+cache)/core). Maybe Intel will someday include a pony.
Molor - Tuesday, August 30, 2016 - link
They kind of do that. They call it the extreme version and charge more for it. Graphics blocks tend to be more forgiving of flaws due to redundancy. AMD and NVidia usually disable a broken SM or two per chip for yields. It would be interesting to know if Intel does the same.
shabby - Tuesday, August 30, 2016 - link
Pretty much: the 12% benchmark increase came from a 13% clock bump, at the same wattage though, apparently.
someonesomewherelse - Thursday, September 1, 2016 - link
Are you talking about single-thread IPC or total chip? I think that single-thread IPC improvements have become (or are becoming) too expensive for most applications without programmer/compiler help. Things that vectorize well are probably the last area where large improvements are realistic. But this will either require great programmers who can actually utilize current (AVX2) and future SIMD instructions in their code, plus higher development costs, or dropping support for older CPUs (neither sounds good). Per-chip IPC is probably easier, but you still need good programmers/compilers or the use of multiple expensive applications at the same time (why not play a game while encoding 3 videos... with enough cores/threads/cache/memory bandwidth/I/O bandwidth this would work).
Clocks could still be increased if you are willing to accept high power consumption and expensive cooling.
However, unless Zen is an extreme success, Intel has no reason to do any of this, since slow and expensive increases are more profitable.
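As a rough illustration of the vectorization win (illustrative only - and admittedly this mostly measures interpreter overhead, though NumPy's native kernel is also SIMD-vectorized):
```python
# Rough illustration of the vectorization gap discussed above: NumPy's sum
# dispatches to a compiled, SIMD-optimized loop, while the pure-Python loop
# pays per-element interpreter overhead. Exact speedups vary by CPU.
import time
import numpy as np

data = np.random.rand(10_000_000)

t0 = time.time()
total = 0.0
for x in data:          # scalar, interpreted
    total += x
t1 = time.time()
vec_total = data.sum()  # vectorized native kernel
t2 = time.time()

print(f"python loop: {t1 - t0:.2f}s, numpy sum: {t2 - t1:.4f}s")
```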
akmittal - Tuesday, August 30, 2016 - link
Any chance to see these in this year's MacBook lineup?
lilmoe - Tuesday, August 30, 2016 - link
As much as I dislike Apple's ways, sometimes they do things for a reason.
1) Apple seems very hesitant to bother with Skylake because of the various problems/bugs associated with the new architecture. It seems that Haswell/Broadwell is doing the job well enough for them and the "new features" aren't worth it in their general assessment. MacBooks are more media consumption/creation-centric and they're probably waiting on the new fixed-function features.
2) Cost and profitability. If the above is true, it makes sense to stay with Haswell/Broadwell to maximize profit. Just like how they're using three-generation-old AMD graphics.
3) Lower than expected demand? Not so sure, but possible.
4) And I'm being hopeful here: *Zen* (and future HBM APUs). Keller has a history working "with" Apple, and they actually like his designs, which play well with their OS(s). I'm being hopeful because Apple's marketing prowess and branding may be the beacon AMD (and the competitive market) needs to unleash the new platform and drive Intel into a corner, forcing them to lower prices.
tipoo - Tuesday, August 30, 2016 - link
There was a Zen iMac APU rumor, but I doubt the MacBooks would get it unless AMD pulled a real coup with Zen power draw.
KPOM - Tuesday, August 30, 2016 - link
New MacBook Pros are expected in October, though. Those would be Skylake unless Apple has plans to use the 4.5W chips with the TDP upped to 7W, which I highly doubt (maybe in an updated MacBook Air, but I doubt that the Air will get an update).
nils_ - Thursday, September 8, 2016 - link
Seems Apple hasn't announced any new MacBook Pros... On the other hand, they probably get a lot more profit from selling outdated hardware.
BillBear - Tuesday, August 30, 2016 - link
According to Paul Thurrott, Microsoft was not at all happy with the general bugginess of Skylake.
>while Intel has never formally confirmed this, my sources at Microsoft and elsewhere have told me that Skylake, the original “tock” release following Broadwell, was among the buggiest of chipsets that Intel has ever released. Problems with Skylake are at the heart of most of the issues that Microsoft has seen with its Surface Pro 4 and Surface Book devices, and it’s fair to say that the software giant now regrets delivering the very first Skylake-based devices into the market in late 2015.
https://www.thurrott.com/hardware/73161/intel-star...
KPOM - Tuesday, August 30, 2016 - link
So don't expect them to rush out a Surface Pro 5 or Surface Book 2 until they are comfortable there are no bugs?
iwod - Wednesday, August 31, 2016 - link
As much as I dislike Apple's ways, sometimes they do things for a reason.
Not sometimes, but EVERYTHING. With Apple, if you don't understand why they did something, it is highly likely that you overlooked something, rather than them skipping over it.
lilmoe - Saturday, September 3, 2016 - link
Everyone does everything for a "reason". More often than not, I don't like those reasons.
I should have clarified; sometimes they do things for a *good* reason. Keyword here is "sometimes", IMO.
jhoff80 - Tuesday, August 30, 2016 - link
Interesting that it says that 4K resolution support is new on Skylake and also requires additional cooling, considering I have a tablet from Cube in China that is a Skylake Core M, fanless, and supports 4K over USB-C/DisplayPort Alternate Mode. Then again, based on what I'm looking at in the UEFI settings, it doesn't appear like Cube followed any of Intel's guidelines either.
jhoff80 - Tuesday, August 30, 2016 - link
Wait, I confused myself a little there thinking it said new on Kaby Lake. Just the additional cooling part is weird though.
rogelio - Tuesday, August 30, 2016 - link
@ Ian Cutress & Ganesh T S
I'm a little confused by the below, can you clarify:
One of the disappointing aspects from Skylake that has still not been addressed in Kaby Lake-U/Y is the absence of a native HDMI 2.0 port with HDCP 2.2 support
My understanding was that retail solution releases of Kaby Lake (e.g. the forthcoming Kaby Lake NUC) would have HDMI 2.0 (not 1.4) support natively. Do you mean that the only way to gain HDMI 2.0 support is to use a converter on the DP 1.2 port? If so, I thought current manufacturers of converters are limited in color bit depth and can't do true 4K 60Hz at full color depth.
nathanddrews - Tuesday, August 30, 2016 - link
nathanddrews - Tuesday, August 30, 2016 - link
You'll probably have to wait for the next chip. It's too bad there's no native 2.0 and no DP 1.3/1.4 like AMD and NVIDIA have in their new GPUs. I still haven't found a DP to HDMI 2.0 adapter that works well for 4K 4:4:4, and adding $25-50 for an adapter to the cost of the chip is stupid. /rant
rogelio - Tuesday, August 30, 2016 - link
Wonder if they'll have on-board Thunderbolt 3. If so, eschewing the on-board HDMI port and getting a Thunderbolt to multi-display HDMI 2.0a adapter would suffice (if such adapters are yet on the market). I guess we'll have to see what port options exist on the upcoming NUCs (since I'm looking to upgrade my first-gen NUC for something fully 4K ready).
kpb321 - Tuesday, August 30, 2016 - link
The additional hardware that is mentioned would be on the motherboard, so it is an integrated HDMI 2.0 port from the end user's perspective of not having to use an external dongle/adapter, but it isn't from the manufacturer's/BOM (bill of materials) perspective, as they have to add an extra chip to support it instead of wiring the port directly to the CPU.
someonesomewherelse - Thursday, September 1, 2016 - link
Well, they could just drop HDCP and tell the MAFIA to get lost, since they want the best experience for their customers, not a legal racket. I don't think anyone would be bothered by this.
acme64 - Tuesday, August 30, 2016 - link
It's kah-bee, like Gaby or Tabby. Why would it be kay-bee?
nathanddrews - Tuesday, August 30, 2016 - link
It's named after Kaby Lake in Canada, which is pronounced like "baby".
woggs - Tuesday, August 30, 2016 - link
It's named for Kabinakagami Lake, which is kah-bee-nah-kah-gah-mee and shortened to kah-bee lake by the locals.
woggs - Tuesday, August 30, 2016 - link
http://www.pronouncekiwi.com/Kabinakagami%20Lake
noeldillabough - Tuesday, November 1, 2016 - link
I fish on that lake and everyone I know says "Kay bee nah kah gah mee"
abrowne1993 - Tuesday, August 30, 2016 - link
"It's just like this other thing that looks sort of similar but is completely unrelated."Why are there still people who think English works like this?
Gunbuster - Tuesday, August 30, 2016 - link
Once again Intel has a good $300 low-power CPU for $250 devices. I wonder if they will still be perplexed as to why these are not flying off the shelves from OEM orders.
fanofanand - Tuesday, August 30, 2016 - link
Well said. I picked up a tablet for my son a couple of months ago; it will do 95% of what a $600 tablet can do at 80% of the performance for $120. Intel is completely misunderstanding the market and its direction.
lilmoe - Tuesday, August 30, 2016 - link
No, they're not "misunderstanding"... There's just no competition, and ARM tablets have lost steam, so why NOT?
BrokenCrayons - Tuesday, August 30, 2016 - link
ARM tablet sales are slipping, but I think their slowing sales are an indicator of an overall tepid market for the tablet form factor, which includes x86 devices. Releasing a high-cost CPU into that market will not net Intel many sales. Maybe the company isn't concerned about that and maybe they're seeking higher margins in a small market, but it does superficially look like the MSRP is unrealistic.
fanofanand - Tuesday, August 30, 2016 - link
Swinging from contra-revenue schemes to top-tier premium pricing seems overly optimistic on Intel's part.
Meteor2 - Wednesday, August 31, 2016 - link
It's for $900 devices, like the Surface Pro.
Gunbuster - Wednesday, August 31, 2016 - link
You missed the point. Instead of offering a sane range of prices and performance levels in their Core and i product lines that fills out the WHOLE range of market needs, they insist on keeping the Atom around.
They could have the low-power versions in low-end devices and impress customers with performance. Instead people get a $100 to $270 device with a just-passable Atom, and it tarnishes the whole Wintel industry's image.
Atom $30, Core $300. Derp Derp Derp
fera79 - Tuesday, August 30, 2016 - link
Kaby Lake was supposed to add native USB 3.1 Generation 2 (10 Gbit/s) support, but the first CPUs show no such feature. What about the other chips that are coming out in January?
retrospooty - Tuesday, August 30, 2016 - link
Wouldn't that be dependent on the chipset? I am not sure if the upcoming 200 series chipsets that will accompany Kaby Lake will support it or not, but it wouldn't be part of the CPU spec.
karma77police - Tuesday, August 30, 2016 - link
This must be the 2nd most boring release from Intel, after the Skylake release of course. All I care about is the 10-core Broadwell-E I am running and future CPUs in that area.
retrospooty - Tuesday, August 30, 2016 - link
They aren't exciting from one year to the next, but what Intel is getting from a 15W CPU is impressive. 5 years ago, a standard "run of the mill" laptop had a dual-core i5 @ 2.5GHz pulling 35 watts. Today's Skylake is still a dual-core i5 @ 2.5GHz, is 15-25% faster depending on the test, but it only pulls 15 watts. Kaby Lake pushes that even further.
karma77police - Tuesday, August 30, 2016 - link
Honestly, I don't care about laptops either, as I find them quite useless. But that is me, personal preference. I understand your point.
retrospooty - Tuesday, August 30, 2016 - link
The same is true for standard desktop CPUs... The top speed isn't getting too much better, but compared to where a Core i5/i7 K CPU was 5 years ago, today's are a lot cooler and more power efficient. A Skylake Core i7 6700K hardly needs any special cooling at all, even when overclocking. You can get any old cheap air cooler and overclock the hell out of it, where that sort of thing used to require an expensive/elaborate setup. The improvements are there in every category, it's just not focused on top speed.
BrokenCrayons - Tuesday, August 30, 2016 - link
It's yet another in a long string of mediocre releases following Sandy Bridge. It might get a bit more interesting in January when more of the Kaby Lake product stack is released, but without something remarkable like more Iris Pro-equipped SKUs, I think we're in for more of the same performance increases. It's not a bad thing, but it is fairly routine.
DanNeely - Tuesday, August 30, 2016 - link
Did Intel say why they aren't supporting DDR4 on any of the 4.5W processors? I thought less power than DDR3 was supposed to be one of its selling points.
Ian Cutress - Tuesday, August 30, 2016 - link
It might be related to the memory controller (more power draw?) or an uptick in yield (+2%?) if you discount the 'DDR4' part. Mind you, it is essentially the same silicon, sharing parts with DDR3L, so it could just be a product differentiation play.
iranterres - Tuesday, August 30, 2016 - link
Latency-related issues perhaps? Just my guess...
TachyonParticle - Tuesday, August 30, 2016 - link
I was hoping for a December 2016 release for Kaby Lake; now they've stated it for January 2017. Probably it's gonna hit the shelves in late January/early February. I'm gonna pull the trigger; I won't be holding out for Kaby Lake, even though I badly needed that 10-bit HEVC support.
Ian Cutress - Tuesday, August 30, 2016 - link
CPUs aren't launched in December, that's prime time for discounts and deals. If you're launching a profitable part, it gets pushed until after the holiday. It's been that way for a long time. You only launch in December if it's a really cheap part sold really cheaply and you're only after market share rather than profit.
KPOM - Tuesday, August 30, 2016 - link
The "Core m" (rebranded i5/i7) looks really impressive. The 2017 MacBook should be a nice upgrade. Hopefully we see more Windows OEMs using the 4.5W chips. They are really good.
negusp - Tuesday, August 30, 2016 - link
That was exactly what I was thinking. 20% better power consumption combined with better thermals and performance should make new fanless devices far more powerful.
Let's hope OEMs don't slaughter the TDP.
Martyprod - Tuesday, August 30, 2016 - link
I was on my way to buy an i7-6700 (Mini-ITX). Should I wait for the desktop release in January? Is it worth it, given the desktop version will feature Iris graphics and not the HD 530 graphics?
I actually have an Nvidia GT 640 that I use only for video editing, OpenGL and CUDA encoding. I'm curious to know if an i7-6700 with Quick Sync is more powerful than the 384 CUDA cores that I have. If yes, since the future desktop Kaby Lake i5/i7 with Iris graphics will certainly do better, maybe it would be worth it? I'm trying to build a machine with no graphics card...
Any suggestions?
vladx - Thursday, September 1, 2016 - link
Only Iris Pro would beat a GT 640 so it's not more powerful since a 6700 has only a HD 530 iGPU.karma77police - Tuesday, August 30, 2016 - link
I hope AMD new Zen performs same as 8 Core Broadwell-E and if they get pricing right they will make both Broadwell-E and this Kaby Lake crap obsolete.negusp - Tuesday, August 30, 2016 - link
Don't see it happening, unfortunately. I predict Haswell-like performance with Zen. However, the Zen APU's for mobile should have killer iGPUs.silverblue - Tuesday, August 30, 2016 - link
IF. There's nothing stopping AMD from ramping up their prices if they have a competitive product, though I doubt we'll see them priced as they were in the early-to-mid-2000s. I wouldn't be surprised to see the absolute high end at about $350-$400, with the best APUs in the $200-$250 range (AMD has released $160+ APUs before, so throwing out something with a 50%+ faster CPU and a 30-40% better GPU would probably be worth $200+).
I saw another review showing way better throttling, power consumption, performance, etc. for the 7500U vs. the 6500U. This looks like a boring release, but if the thermals are that much better, it's actually an exciting release for laptops and 2-in-1s. They also upped the boost clocks a lot, so it might be a good overclocker on the desktop side. Of course it's all wait-and-see, but this might be a good release even if there weren't any IPC gains.
It doesn't seem like temps have been the limiting factor with Skylake overclocking, so better thermals may not necessarily be an indicator of potential overclocking performance. My understanding is that z-height issues were preventing soldering the lid (not sure how they were able to remedy that for DC), so whether or not KL is a solid overclocker will have more to do with factors other than temps.
Has anyone actually made VP9 acceleration work on their Skylake system?! I certainly haven't on my Surface Pro 4, even with the latest driver from MS...
VP9 is only partially supported in terms of encoding and decoding (Intel was kind of hazy on that). I force H.264 on Chrome simply because I don't have bandwidth issues and shouldn't need VP9.
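For what it's worth, a quick way to see what a browser claims it can play is the standard MediaSource API; here's a minimal sketch to run in the devtools console (it reports codec support, not whether decoding is hardware-accelerated):

```ts
// Ask the browser whether it accepts VP9 (WebM) and H.264 (MP4) streams.
// Note: "true" only means the codec is supported in software or hardware;
// Chrome's chrome://media-internals page shows whether hardware decode is used.
const vp9Supported = MediaSource.isTypeSupported('video/webm; codecs="vp9"');
const h264Supported = MediaSource.isTypeSupported('video/mp4; codecs="avc1.42E01E"');
console.log(`VP9: ${vp9Supported}, H.264: ${h264Supported}`);
```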
How did we get to 7 pages of comments and no flame war yet? Who's not here?
Vlad and ddriver. Once they get here all hell will break loose like usual.
Seems like a decent upgrade for notebooks and for 4K videos.
http://www.notebookcheck.net/Kaby-Lake-Core-i7-750...
"We are told that transistor density has not changed, but unless there’s a lot of spare unused silicon in the die for the wider pitch to spread, it seems questionable."With the taller fins they're increasing the drive current which each fin provides. A transistor uses several fins, depending on how much current it has to drive. By using taller fins Intel can get away with fewer of them and hence can place them further apart and reach the same transistor density.
D'oh! There go my hopes for proper Skylake support on Linux.
OK, so there's a lot of marketing BS in this article; not AnandTech's fault, rather Intel's. What I'm seeing is virtually no IPC improvement, not even 1-2%; all the gains are coming from optimized max turbo clock speeds. The GPU is what sees the majority of the improvements, though there are a few fixed-function blocks on the CPU that are new for VP9.
Intel is doing mobile first yet again. I don't blame them; those mobile chips are way overpriced and carry the biggest profit margins. A $300 Core M CPU, anyone?! Or how about a $450 quad-core mobile i7? Yeah, same old tricks from Intel. It won't be long before Intel is irrelevant, I'd say within 10 years, mostly due to AMD's recent IPC gains and ARM taking a bigger part of the market if the Intel-fabbed Apple iPhone chip rumors pan out. Some say Apple will move all their mobile chips to ARM, fully custom Apple designs, but made in Intel's 10nm and 7nm foundries.
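As a rough sanity check on the clocks-only claim: at equal IPC, performance scales linearly with frequency. A minimal sketch using the published max turbo clocks of the i7-6500U (Skylake-U) and i7-7500U (Kaby Lake-U):

```ts
// perf = IPC * clock; with IPC assumed unchanged, the expected uplift
// is just the clock ratio between the two generations' turbo speeds.
const ipc = 1.0;              // normalized; assumed identical across generations
const turbo6500U = 3.1;       // GHz, i7-6500U max turbo
const turbo7500U = 3.5;       // GHz, i7-7500U max turbo
const uplift = (ipc * turbo7500U) / (ipc * turbo6500U) - 1;
console.log(`Uplift from clocks alone: ~${(uplift * 100).toFixed(1)}%`); // ~12.9%
```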
I disagree on most points, but I have to agree with you on Intel's current strategy. I know they could release 10nm parts this year. It might be painful and expensive, and it works for the near term, but as ARM continues to build bigger and faster parts, the gap keeps shrinking year after year. Next thing Intel knows, people will start using PCs based on ARM. Microsoft's strategy of releasing and maintaining an ARM-based OS (Windows Phone) doesn't look as bad now. Intel is ignoring the mainstream/consumer market that funds the R&D and tooling required for the lucrative products in the server/enterprise market.
Well, Windows on ARM didn't work out (RT), unlike Android on x86, which is good. But maybe with UWP Microsoft will have another go; they've opened pathways to get legacy software onto UWP, which would enable it to run on ARM.
No it doesn't. Full UWP apps will run on any platform, but legacy software (i.e. x86 software) that has just been packaged as a UWP app (so it can be sold via the Windows Store) won't be able to run on anything but x86 platforms.
For me, looking to buy a new laptop at the end of this year, the hope is that this will drive Skylake laptop prices down further. However, given that Kaby Lake probably won't reach retail until Q1 next year, I don't think I'll see much beyond the savings from holiday pricing.
So basically this iteration of Intel CPUs opens a competitive window for AMD. This is the first time in 10 years that Intel doesn't have some improvement in IPC. We'll have to wait for the benchmarks and reviews, but this is a chance for AMD. AMD needs to, and must, deliver so we consumers get some healthy competition.
> There are a couple of reasons for this - the one Intel discussed with us is that it comes down to performance and OEM requests. An i5 in a 4.5W form factor (or as high as 7W in cTDP Up) performs essentially on par with the KBL-U i5 parts, and OEM customers requested that the naming scheme change to reflect that

What a load of bull. More misleading marketing tricks from Intel once again.
The thing is, Core i5 should never have been in the 4.5W range; when it was, it just meant the chip was either heavily throttled or it would overheat.
So Intel created the m5 to signal a lower-performance product that matches the 4.5W TDP better.
But NOW Intel is renaming the m5 to Core i5 to trick people into thinking they get performance similar to the higher-TDP parts, while saying "but Core i5 at 4.5W is the same as m5". Yeah, but Core i5 (with a level of performance worthy of the i5 name) NEVER WORKED WELL at 4.5W; that's why it needed the higher TDP.
This is the same cr@p Intel pulled when it renamed mobile Atom chips to Celeron and Pentium.
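The throttling point is easy to illustrate with a toy power model (all constants below are illustrative assumptions, not measured chip data): dynamic power scales roughly with f·V², and since voltage tends to rise with frequency, power grows closer to f³, so a small budget caps the sustainable clock hard:

```ts
// Toy model: how a 4.5W budget caps the sustained clock.
// Assumes power ~ f^3 (f * V^2 with V roughly linear in f).
// burstClock and burstPower are made-up placeholders, not real chip data.
const burstClock = 3.1;    // GHz, short-lived turbo clock
const burstPower = 15;     // W drawn while running at the turbo clock
const tdp = 4.5;           // W, long-term power budget

// Solve tdp = burstPower * (f / burstClock)^3 for f:
const sustainedClock = burstClock * Math.cbrt(tdp / burstPower);
console.log(`Sustained clock ~${sustainedClock.toFixed(2)} GHz`); // ~2.08 GHz
```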
So the new MacBook Pro won't be here until January? Only the Iris Graphics-equipped models by then? Or do you think Apple will announce a non-Iris variant first this year, then more powerful models next year?
The long road to realtime 4320p60 HEVC Main 4:4:4 12 and HDMI 2.1 :D
Page 2, "Working with Intel, they pushed through a new BIOS for the NUC that kept the OPI at PCIe 3.0 x4 speeds"I don't understand who "they" is in this context?
Intel.
All the Intel news fact sheets and no conclusion make this article look like an Intel advertisement.
What are the base and max frequencies of the i7-7500U at the OEM-configurable TDP settings, i.e., 7.5W low and 25W high?
Wish they had released already; yesterday my Sandy Bridge died (likely PSU + motherboard). I would have preferred replacing it with Kaby Lake + Z270 rather than Skylake + Z170.
In other words, shit.
Use this to remember: Y = Core M, because "Y did they do such stupid crap?" Same with Celeron covering both Broadwell and Braswell.
Intel is in a total stall. New Moore's-SanX law: doubling transistor count every 10 years at no performance gain. :-)
LOL 😂 The new naming scheme will create even more confusion for customers.
Well, I bought a Skylake CPU last year, so wake me up when Cannon Lake goes on sale.
Sigh, still no GT3e GPUs at the high end, nor where they're really needed. And Intel wonders why sales are dropping.
If i3, i5, and i7 processors are all dual-core processors with Hyper-Threading, equivalent to what used to be called i3, does that mean that when the rest of the range launches, quad-cores without Hyper-Threading (formerly i5) would be i9, i11, and maybe i14, and quad-cores with Hyper-Threading (formerly i7) would now be i15, i17, and i19?
Does anyone know if both the HDR10 and Dolby Vision formats are supported by the new 7th-gen media block? I see it has added support for 10-bit HEVC decoding and HDR-to-SDR tone mapping, but I don't see any information on supported HDR standards. The open standard does up to 10-bit with static metadata, while the proprietary one does up to 12-bit with dynamic metadata.
How come Intel clearly states only "HEVC - 8 bit support" in its datasheet (page 27, chapter 2.2.3, "Media Support")?! http://www.intel.com/content/www/us/en/processors/...