That's what I thought about when I read "TEN year anniversary". It certainly doesn't feel like it was yesterday... but in my mind it only feels about as old as "last month", and that's mostly thanks to i7s, FXs, IPS panels, SSDs and some other things that proved to be landmarks in tech history.
I just realized I have an old HP desktop with a C2D E6400 that will turn 10 in a few months, and it's still humming along nicely every day. It ran XP until this May, when I switched it to Win10 (and a brand new SSD). The performance it offers in day-to-day work still amazes me, and it sometimes makes me wonder why people with very basic workloads would buy anything more expensive than this.
Well my first computer had a 6510 running at 1 MHz. Funnily enough, I never owned a Core 2 CPU. I had an AM2+ motherboard and I went the route of the Athlon X2, Phenom and then Phenom II before finally switching to Intel with a Haswell i7.
Core 2 really changed the CPU landscape. For the first time in several years, Intel firmly beat AMD in efficiency and raw performance, something AMD has still not recovered from.
I just retired my dad's E6750. It was actually still trucking along in an Asus board with an Nvidia chipset that I had figured would be dodgy, because the huge aluminum heatsink on the chipset was just nasty and made the whole system run hot. Damned if that thing didn't last right into 2016. Surprised the hell out of me.
I built my current PC back in 2007 using a Pentium Dual Core E2160 (the $65 bang-for-the-buck king), which easily overclocked to 3 GHz, in an Abit IP35 Pro. Several years ago I replaced the Pentium with a C2D E8600. I'm still using it today. (I had the Q9550 in there for a while, but the Abit board was extremely finicky with it and I found that the E8600 was a much better overclocker.)
Built a Q6600 rig for a mate just as they were going EOL and were getting cheap. It's still trucking, although I suspect the memory bus is getting flaky. Time for a rebuild, methinks.
And a monster NAS to store the likely hundreds of thousands of photos she's processed on it, which are currently scattered across multiple USB HDDs in her basement.
It's not just CPUs that have moved on - who'd have thought ten years ago that a *good* four bay NAS that can do virtualisation would be a thing you could get for under £350/$500 (QNAP TS451) without disks? Hell, you could barely build even a budget desktop machine (just the tower, no monitor etc) for that back then.
Still have an E8400 rig that I use every day: overclocked to 4GHz, 8GB of DDR2-1066, an OCZ Vertex 2 SSD, plus its 6MB of cache, on a P45 mobo. It can hold its own to this day, easily. The E8000 series is one of the best "future-proof" CPU lines ever; next up, IMO, will prove to be Sandy Bridge. Have a 2500K at 4.5GHz on a Z68 mobo, 16GB DDR3-2400, a Samsung 850 Pro SSD, and now a GTX 1060; it plays any game I want at 1080p and max quality, easily.
My E8400 is still my daily driver, with 4x 2GB and an SSD swapped in later as the boot drive. Still runs great, except it tends to get bogged down by TrustedInstaller and the Firefox memory leaks.
I've got an E8600 in an Abit IP35 Pro motherboard. I was having a hard time finding DDR2-1066 last I looked, so I settled for 800. With an SSD and 7870, it's surprising how well it still games. I don't think I'll upgrade the GPU again just due to the fact that I'm limited to PCI-e 2.
I just upgraded out of a Q6600 and 4GB DDR2 about 2 months ago and I admit that I was still kicking around the idea of leaving it alone as I was pulling the motherboard out of the case. I replaced it with a cheap AMD 860k and 16GB DDR3 which really hasn't done a lot to improve the system's performance. In retrospect, I think I could realistically have squeezed another couple of years out of it, but the motherboard's NIC was iffy and I really wanted reliable ethernet.
As for laptops, I've got a couple C2Ds kicking around that are perfectly adequate (T2310 & P8400) for daily use. I really can't see any point in replacing them just yet. Core was a good design through all its iterations.
Well the NIC wasn't the only reason, but it was the last in a series of others that I was already coping with that tipped the scales. The upgrade was under $200 for the board, processor and memory so it really boiled down to one weekend dinner out to a mid-range restaurant. It was worth it for more reliable Steam streaming and fewer VNC disconnects as that wired ethernet port is the only means by which I regularly interact with my desktop since it has no monitor and is crammed into a corner in my utility room.
Actually, I didn't give much of anything in the system a very close look before buying. I admittedly did about twenty minutes of research to make sure the 860k and the bottom feeder motherboard I'd picked would play nicely together before making a purchase. So the CPU & motherboard pair were the result of laziness and apathy rather than a preference for FM2+.
Ah OK, gotcha. I just wanted to share that if you have a Microcenter near you, they sell FX-8320Es bundled with motherboards for $125 to $170 depending on which board you want. That can be quite the steal and a great base for a new cheap system once you bump the clocks on the 8320E.
Great chip. Only just upgraded from my QX6850 last month. Paired with a GTX 970 it was doing just fine running all new games maxed out at 1080p. Amazing for something nearly a decade old!!
You can't do DRAM in glasses, not in a real way, and glasses are what mobile will be by 2025. On-package DRAM is next year or soon after, not 2025. You can't have big cores either, and you need ridiculous GPUs and extreme efficiency. Parallelism and accelerators are where computing needs to go, from mobile to server. We need 10-20 mm³ chips, not 100 cm² boards, new non-volatile memories instead of DRAM, and so on. It will be interesting to see who goes 3D first with logic on logic, and then who makes 3D the default in their most advanced process.
At the end of the day, even if the shrinking doesn't stop, 2D just can't offer enough for the next form factor. Much higher efficiency is needed, a planar chip would be far too big to fit in the device, and the costs would be mad. Much more is needed, for robots too. Costs and efficiency need to scale, and with planar the gains are small at best.
On-package DRAM seems to be a "forever coming" tech. AMD's Fury X basically shipped it, and it went nowhere. I'm guessing it will be used whenever Intel or IBM feel it can deliver a serious advantage on some high-core-count server chip, or possibly when Intel wants to build a high-speed DRAM cache (with a high-speed bus) and use 3D XPoint for "main memory".
The slow rollout is shocking. I'm guessing Nvidia eventually gave up on it and went with tiling (see the Kanter demo, but ignore the thread: nothing but fanboys beating their chests).
My first Core 2 Duo was an E4400 that I bought in 2007, I believe; the thing lasted me up to 2011, when I upgraded to an i5 2500K. I should've kept that C2D just for nostalgia's sake. I used it intermittently as a Plex server and it worked great on FreeNAS. The only issue was that it was really noisy and would get hot.
I've got a few old servers kicking around, all with valid Windows Server licenses, but due to UK electricity costs I just can't bring myself to have them running at home 24/7 just to serve a backup, or yet another Breaking Bad viewing session... :) which we can do locally now.
My old E6700 is still alive and kicking. I only just replaced it as my primary system when Devil's Canyon came along. Still use it for my four year old's "first computer."
I'm not a particle physicist or electrical engineer, so this is just some pie-in-the-sky wondering, but wouldn't it be possible to build transistors using carbon nanotubes, or light itself (using nano-sized mirrors/interferometers, like DLP), or even basing the transistor gates on protons/subatomic particles?
I think a more interesting question is using glass as a substrate. Imagine printing NAND, CPU, GPU and RAM along the bezels of a smartphone.
That reduces a phone to six components: a display, a transducer for sound, a mic, a battery, a radio, and a chassis, which would have all the antennas.
Particle physicist here. Light has the tricky property that it travels at the speed of light so I can't imagine it working but perhaps I'm envisioning your concept differently than you are. For carbon nanotubes, you'll need a materials engineer or a condensed matter physicist.
Materials/Semiconductor Physics Engineer here. The problem is not what we CAN do, the problem is what is economically possible at scale. For example, FinFETs were demonstrated at the turn of the century, but took all of those years to become (1) necessary - planar transistors were getting too leaky - and (2) possible to fabricate economically at large scale.
Researchers created smaller, faster transistors years ago, but it takes a lot of time and effort to develop the EUV or quadruple-patterning technologies that enable these devices to be reliably and affordably manufactured.
So I think the problem in moving "beyond silicon" is not that we don't have alternatives; it's that we have many alternatives, and we just don't know which will scale. It becomes less of a purely engineering problem and more of a manufacturing business problem. When new technologies relied purely on the established silicon industry alone, you could reasonably extrapolate how much each new technology would cost as the nodes were scaled down. When we talk about using III-V FinFETs / gate-all-around devices or graphene and carbon nanotubes, we don't really know how those things will scale with the existing processes as we move them from the laboratory to the manufacturing line.
I've been looking forward to this transition for years. People moan that it is the end of Moore's Law, but that could be a good thing. Silicon is a great material for forming logic circuits for many reasons, but it also has many downsides. While silicon never reached 10 GHz (as Intel once predicted), other materials easily blow past 100 GHz transistor switching speeds. When the massive engines that work tirelessly to shrink our lithography nodes nm by nm are aimed at "the next big thing", we might be pleasantly surprised by a whole new paradigm of performance.
So what competes with modern day Si CMOS on speed, power usage, and cost? Nothing... yet!
Yes, it's fascinating stuff. Thanks for reminding me about that. I recall now that it was graphene that enabled those insanely high switching speeds, due to its incredible conductivity/efficiency. Hopefully it can be made economically feasible at some point! Imagine the next GPU being 10x smaller and running at 100x the clock speed. A GTX 1080 Ti x 1000! Finally we could do real-time true global illumination, ha...
That's a good point. Like answering the question "are you willing to pay $800 for a new CPU to double the computer's speed?" - most consumers say no. It all comes down to the mass-market price.
From the birth of the Univac until 10 years ago, consumers consistently said YES! and plunked down their money. Doubling the (per-thread) speed of a Core 2 Duo is going to cost more than $800. Also, the cost of the RAM on servers is *WAY* more than $800, so you can bet that if Intel could double the power of each core, they could crank prices up by at least $800 per core on Xeons. They can't, and neither can IBM or AMD.
Sure, but that speed is dependent on the medium. There are some proposed optical transistors using electromagnetically induced transparency. Long way off. However, silicon photonics could change some things. Capacitance is the killer for electronic interconnects, whether chip-to-chip or on-chip bus. An optical interconnect could greatly increase bandwidth without increasing the chip's power dissipation. I think an electronic-photonic hybrid is more likely, since silicon photonics components can be made on a CMOS process. We are already beginning to see optical PCI Express being deployed. I could definitely see a 3D approach where 2D electronic layers are connected through an optical rather than electronic bus.
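To put rough numbers on the capacitance point: CMOS dynamic power goes as P ≈ α·C·V²·f, so a wide, fast electrical bus burns power in direct proportion to the capacitance it has to swing. A minimal sketch, with purely illustrative (assumed) values for bus width, per-line capacitance, voltage and clock:

```python
# Rough illustration of why capacitance limits electrical interconnects.
# All numbers below are assumptions for the sake of example, not measurements.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

# Assumed: a 64-bit off-chip bus, ~2 pF per line, 1.2 V swing, 1 GHz, 25% activity.
lines = 64
p_per_line = dynamic_power(0.25, 2e-12, 1.2, 1e9)
print(f"per line:  {p_per_line * 1e3:.2f} mW")
print(f"whole bus: {p_per_line * lines * 1e3:.1f} mW")
# Doubling the clock or the bus width doubles this figure, which is why optical
# links look attractive for chip-to-chip bandwidth.
```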
Yes, transparency, like polarized windows that either become transparent or opaque when a current is applied (to the liquid crystals?). I wonder how small they could be made. It would be incredibly power efficient I would think.
I've just finished decommissioning all my Core 2 Duo parts, several of which have been upgraded with 2nd hand Sandy Bridge components.
Yeah, CPU performance has been relatively stagnant. CPUs have come to where commercial jets are in their technological development: jets now fly slower than they did in the 1960s, but have much better fuel economy per seat.
Not noted in the E6400 vs. i5-6600 comparison is that they both have the same TDP, which is pretty impressive. Also, you've got to take inflation into account, which would bring the CPU price up to $256 or thereabouts - enough for an i5-6600K.
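For anyone checking the math, the inflation adjustment is just a cumulative CPI multiplier. A quick sketch; the ~15% cumulative figure for 2006-2016 US inflation is an assumption, so treat the output as ballpark:

```python
# Back-of-the-envelope inflation adjustment for the E6400's launch price.
# The cumulative CPI factor is an assumed ballpark figure, not official data.

launch_price_2006 = 224.00      # Core 2 Duo E6400 launch price, USD
cumulative_inflation = 0.15     # assumed ~15% US CPI growth, 2006 -> 2016

price_in_2016_dollars = launch_price_2006 * (1 + cumulative_inflation)
print(f"${launch_price_2006:.0f} in 2006 is roughly ${price_in_2016_dollars:.0f} in 2016 dollars")
# -> about $258, within spitting distance of an i5-6600K's launch price.
```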
One could argue that while Core put Intel on top of the heap again, Sandy Bridge was a more important shift in design and as a result, many users went from Conroe to Sandy Bridge and have stayed there.
That pretty much defines my PC currently. Haven't needed to upgrade. Crazy how a decade goes by like nothing.
I'm the CPU editor. We've been up to date for every major CPU launch for the last couple of years, sourcing units that Intel hasn't provided to other websites, and have done comprehensive, extensive reviews of every leading x86 development. We have had every Haswell-K (2), Haswell-E (3), Broadwell (2), Broadwell E3 Xeon (3), Broadwell-E (4) and Skylake-K (2) CPU tested and reviewed on each official day of launch. We have also covered Kaveri and Carrizo in deep, repeated detail over the last few years.
This is an important chip, and today marks an important milestone.
Ian, your reviews are always top notch. You have incredible knowledge, and your understanding of both CPU and memory architecture is unparalleled in journalism. Ignore the trolls, this was a fantastic article.
Different editors for different content. Honestly, I thought this was a great piece. I do think the site is not quite what it was back then - just go read the articles on Fermi, or from when Bulldozer launched; those were much deeper dives into the architecture. I realize that Intel and the other manufacturers may not always be willing to release much info, and they seem to release less these days, but I don't know - the site feels different.
Honestly, and I am pretty forgiving, being as late as this site has been on the recent GPU reviews is pretty inexcusable. Although that obviously has nothing to do with Ian - I always like Ian's articles.
I like articles like these. Sometimes certain processors stick around as the baseline in my brain even after a decade (holy hell!). Core 2 Duo is always a reference point for me, so is a 3GHz P4.
Wow. I've assumed that they would at least burn out so that they would need replacement (like my old super celeries). I'm sure you can measure a speed increase between a modern i5 and yours, but it would be hard to notice it.
Nice, thanks Ian. Interesting to look back and then ahead. I still use my E6400 in a media playback machine using the first "good" integrated graphics, the NVidia 9300. Since it runs at stock frequency @ 1V (VID spec is 1.325), it's pretty efficient too.
I think Core 2 essentially accelerated the market saturation we are seeing, which is causing the PC market to decline a bit. My Core 2 E8400 still runs Windows 10 relatively fine, although I have built two more machines since because I like being near the cutting edge. However, I know quite a few people still using Core 2 CPUs for their basic computing needs.
There just haven't been any new must-have apps that are more resource-intensive than a word processor or web browser, so the PC replacement market has stagnated a bit.
Most Core processors are faster than the ho-hum Cherry Trail offerings you find in low-end PCs, so buying a new cute shiny little black box to replace your big beige box doesn't guarantee much.
It reads a little weird/myopic that only certain technologies are being considered while forecasting all the way out to 2030. For instance, lots of NAND/DRAM discussion but no mention of upcoming or already early-adoption tech like 3D XPoint or memristors, etc. No mention of optoelectronics (like photonic signalling on- and off-chip), no mention of III-V and other 'exotic' materials for chip manufacturing and improved frequency/power scaling (with focus instead devoted to feature sizes/stacking/platter size/defects.) And so on.
I mean, if you're forecasting 5 years ahead, I'd understand. But talking about 15 years into the future but only extrapolating from what's on the market right now -- as opposed to what's in the labs and on drawing boards -- seems to be a little too pessimistic and/or myopic.
The full report mentions III-V and SiGe in the remit of future technologies. Anton and I are starting to discuss what parts we can pull out for individual news stories, so stay tuned.
Heck, I still have my NexGen P110 CPU computer set up and run it once in a while. From 1996. Remember the VESA Local Bus video card? NexGen was later bought by AMD.
I've still got a Dell E1705 laptop that I bought in 2006, which came with a Core Duo that I upgraded to a Core 2 Duo about 4 years in, and I maxed the RAM to 4GB (from the 2GB it came with). It was decent, but really came alive when I put an SSD into it. I still use this laptop for basic stuff, and even some gaming (WoW and SWTOR) with the GeForce Go GPU. It's definitely long in the tooth now, running Windows 7 (it came with WinXP, but 10 is unsupported on the GPU even though there's a workaround). I'm thinking mobile Kaby Lake and mobile Pascal will be in the next laptop I keep for another 10 years.
Can you beat me? Last month I finally upgraded my primary rig from a C2D E4300 @ 2.7GHz! The memory started failing last year and I couldn't find cheap DDR2, so I was down to 2GB. Went for an i5-6500 and 16GB of DDR4. The difference is incredible!
Great article, Ian! I found it a very good read, and it's always nice to take a look back and analyze what we've been through so far. I also want to point out a few mini-errors I've found in the article: "The Core 2 processors all came from a 143mm2 die, compared TO the 162mm2 of Pentium D"; "by comparison to the large die sizes we see IN 2016 for things like the P100"; "whereas the popular Core 2 Duo E6400 at $224 WAS at the same price as the Core i5-6600"; "As we NOW know, on-die IMCs are the big thing"; "Geometrical Scaling, when this could NO longer operate"; "By 2020-25 device features will be REDUCED" (?); and "On the later" should be "latter".
I replaced my C2D a couple of years ago, only because it needed yet another mobo and PSU and I do like shiny things. I'd bet that if it were still around, I could pop in my old GTX 660 and run most games just fine at 1080p. At work there are some C2Ds still kicking around... and a P4 with XP! Of course a lot of larger businesses have legacy gear and apps, but it made me chuckle when I saw the P4.
With the plateau in needed performance on the average desktop there just isn't much reason to upgrade these days other than video card if you are a gamer. Same thing with phones and tablets - why aren't iPads selling? Everyone got one and doesn't see a need to upgrade! My wife has an original iPad and it works just fine for what she uses it for so why spend $600 on a new one?
You're not mentioning FPGAs and the non-volatile memory revolution, which could very well be coming soon (not just flash, but XPoint and other similar stuff).
Personally I see FPGAs as a clear use for all the transistors we might want to give them.
Program stuff, let it run through a compiler-profiler, and let its adaptive, cloud-trained AI create an optimal "core" for your most performance-intensive code. This recipe is then baked together with the executable, which gets programmed in near-real-time onto the FPGA portion of the SoC you are using, only to be reprogrammed when you "alt-tab" to another program.
Obviously we'll still need a massively parallel "GPU" portion in the chip, ASIC blocks for H.265 encode/decode with 8K 120Hz HDR support, encryption/decryption and other similar ASIC uses, and 2-6 "XYZlake" CPUs. The rest of the chip will be FPGA, with ever more intelligent libraries, compilers and profilers used to determine at software compile time the optimal recipe for the FPGA programming (a rough sketch of that flow follows this comment).
Not to mention the paradigm changes that fast non-volatile memory (XPoint and future competitors) could bring.
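Purely as a thought experiment, the compile-profile-reprogram loop described above might look like the sketch below. Nothing here corresponds to a real toolchain; the function and class names are placeholders that just make the imagined flow concrete:

```python
# Hypothetical sketch of the "profile -> synthesize a custom core -> reprogram on
# task switch" flow described above. Nothing maps to a real toolchain; the stubs
# only make the control flow concrete and runnable.

from dataclasses import dataclass

@dataclass
class Binary:
    executable: str      # normal CPU code path
    bitstream: str       # FPGA configuration baked in at compile time

def profile(source: str) -> list[str]:
    # Stand-in for a compiler/profiler pass that finds hot regions.
    return [line for line in source.splitlines() if "hot" in line]

def synthesize(hot_spots: list[str]) -> str:
    # Stand-in for generating an FPGA "core" tuned to the hot code.
    return f"bitstream({len(hot_spots)} accelerated regions)"

def build(source: str) -> Binary:
    return Binary(executable="cpu fallback code", bitstream=synthesize(profile(source)))

def on_task_switch(binary: Binary) -> None:
    # In this model, alt-tabbing reprograms the FPGA slice of the SoC.
    print(f"loading {binary.bitstream} onto FPGA, running {binary.executable}")

app = build("setup\nhot loop: matrix multiply\nteardown")
on_task_switch(app)
```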
FPGAs are old hat. Granted, it might be nice if they could replace maybe half of their 6T SRAM waste (probably the routing definitions, although they might get away with 4T), but the look-up tables certainly need to be 6T SRAM. I'd claim that the non-volatile revolution happened in FPGAs (mainly off-chip) at least 10 years ago.
But at least they can take advantage of the new processes. So don't count them out.
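For anyone wondering where all that 6T SRAM goes: a k-input look-up table is essentially a 2^k-entry truth table held in SRAM and addressed by the input signals. A toy model (the transistor count at the end assumes the 6T cell mentioned above):

```python
# Toy model of an FPGA look-up table: a k-input LUT is a 2**k-bit truth table
# stored in SRAM, indexed by the input signals.

def make_lut(truth_table):
    """truth_table: list of 2**k output bits, index = inputs read as a binary number."""
    def lut(*inputs):
        index = 0
        for bit in inputs:                 # inputs drive the SRAM address lines
            index = (index << 1) | (bit & 1)
        return truth_table[index]
    return lut

# A 2-input XOR as a 4-entry table: inputs 00,01,10,11 -> 0,1,1,0
xor2 = make_lut([0, 1, 1, 0])
print([xor2(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]

# Storage cost: a 6-input LUT needs 2**6 = 64 config bits; at 6 transistors per
# SRAM bit that's ~384 transistors per LUT before any routing configuration.
print(2**6 * 6, "transistors of 6T SRAM per 6-input LUT (rough)")
```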
I'm reading this from my old Sony laptop with a Core 2 Duo and an Nvidia GPU in it. With an SSD added, basic task performance is virtually indistinguishable from my other computers with much newer and more powerful CPUs. Granted, it can get loud under load, but the Core 2 era was still a ways away from the new mobile-focused Intel we have now.
I guess my basic point is that I got this laptop in 2009 and, for regular browsing tasks etc., it is still more than adequate, which is a testament both to the quality and longevity of the Core 2 family and to where we are with CPU power in general. Good stuff.
I agree. Got a Sony SZ1M in 2007 (I think?) and swapped the Core Duo (Yonah) for a Core 2 Duo T7200 (Merom), because it's 64-bit, so now I can run a 64-bit OS and 64-bit software on it.
To some of you it may sound like a surprise, but a Core 2 Duo desktop can still be fairly usable as a media-consumption device running Windows 10. I am friends with a couple who are financially struggling graduate students. The other day they brought home an ancient Gateway PC with an LCD from work, and they were wondering if they could rebuild it into a PC for their kid. The specs were 2GB of memory and a Pentium E2180 CPU. Inside a box of ancient computer parts that I never throw away, I found an old Radeon graphics card and an 802.11n USB adapter. I told them to buy a Core 2 Duo E4500 processor online because it costs next to nothing these days. After installing Windows 10, the PC runs fairly smoothly - good enough for web browsing and video streaming. I could even load some older games like Quake 3 and UrbanTerror 4.2 and play them with no glitches.
Wow. Actually, just last holiday season, I replaced my parents' old P4 system (with 512 MB of RAM! and a 250 GB SATA Maxtor HDD!) with my old Core i7-860, since I had upgraded to a system with a Core i7-4790K that I got in a Black Friday sale. The old 860 could definitely still run well for everyday tasks and even gaming, so it was more than good enough for my parents, but the video processing capabilities of the more recent chips are a lot better, which is the main reason I upgraded. Also, the single-threaded performance of the 4790K was amazing, and the Dolphin emulator did not run so well on my 860, so there was that.
Speaking of Core 2, though, I owned an ASUS UL30Vt with the Core 2 Duo SU7300 CPU and an Nvidia GeForce G 210M. While the weak GPU was not so great for high end gaming, the overall laptop was amazing. It was more than powerful enough for everyday tasks, and had amazing battery life. It was pretty much what every ultrabook today desires to be: sleek, slim, but powerful enough with great battery life. That laptop to me was the highlight of the Core 2 era. I was kind of sad to let it go when I upgraded to a more powerful laptop with an Ivy Bridge CPU and 640M LE GPU. I don't think any laptop I owned gave me as much satisfaction as that old Asus. Good times.
Hmm. We've got an ancient 2007 MacBook with a 2GHz C2D (T7200, I think) in it that's still used for web browsing on a daily basis. Upgrading it to 4GB of RAM and an SSD made it surprisingly capable.
It's not all a bed of roses though, as random things will come out of left field and floor it. I think it's mostly Flash-heavy sites, but Twitter and Vine freak it out a little.
I vividly remember the anticipation and hype leading up to the C2D release, and the years of struggle Intel had with NetBurst before Conroe. It was what I consider the end of the golden age of the CPU. Great job Ian!
Ah, it wasn't the Pentium Pro it was based on. The Core family was a direct descendant of the Pentium 3 Tualatin. They stopped sales of the Pentium 3 Tualatin because it was outperforming the Pentium 4. They migrated that technology to the notebook line, as it was much more efficient, and it became the Pentium M. When Intel realized that the Pentium 4 NetBurst architecture was a dead end and they needed a new chip to go up against AMD, they drew on their notebook chips to build the Core series. See, this is what is called re-writing history. Come on guys, it is very well known that they sourced the Pentium M (Yonah) for the Core series. I do not know who did your research, but it is all wrong. Go back and recheck your information. The Pentium Pro was the granddaddy of all the Pentium 2 and 3 chips, so yes, you can point to that chip in a vague way as the ancestor - but then the Pentium 4 can claim it as well. So, to the point: the Core line's DNA goes back directly to the Pentium 3 Tualatin, so we have all been using very, very hopped-up Pentium 3s for the last 10 years. The Tualatin was an excellent chip. It overclocked like crazy. There were Celeron and server P3 versions, and all of them beat the hell out of the P4. No wonder Intel had to kill it. Do more research so you can post accurate articles, please.
These are not the original AT guys; they are all new people, and they are not doing the research they should be doing. This is how history gets changed: people look to a reputable tech site that got something wrong, and it's written in stone - "well, AT says this is how it is" - even if they are wrong. Go check the history directly from Intel. This article is wrong and that is a fact, period. I felt it just needed to be called out.
Except that the Pentium Pro was the first chip with the P6 architecture. The Pentium 2 was pretty much a Pentium Pro with MMX, a higher clock rate, and slower (off-chip but on-slot) L2 cache. The Pentium 3 was the same with more clock, more MMX (plus SSE), and on-chip (full-speed) L2 cache.
While I'd have to assume they would pull all the files from the Pentium 3 plans, I'd still call it "Pentium Pro-based", because that was where nearly all the architecture came from (with minor upgrades and bug fixes to the core in the 2 and 3).
I'm still curious as to exactly how they pulled it off. My main theory is that they duplicated the block diagram of the P6, and then verified each block was correct (and exactly duplicated the P6 at a higher speed), then used the P6 verification to prove that if the blocks were all correct, they had a correct chip.
Same here. I thought it was the design of the Pentium M (from the Israel team) that they got Core from. That was the time when AMD was beating Intel's P4s in performance, efficiency, and price. After a few months, articles were posted showing people overclocking a Pentium M to the level of the AMD CPUs and, of course, beating Pentium 4s at much lower clock speeds. From there, Intel Core was born out of the Pentium M, which is essentially the same design, only with higher TDP and clock speeds. Then came the Core Duo, then the Core 2 Duo.
I started college in electrical engineering and moved to software after an EE class using C++. I was very excited about and confident in a DIY PC. I knew the Core 2 was on its way. I gathered parts from whatever computers I could scratch together - power supply, case, DVD drive, network card(s), HDDs... everything but the mobo, CPU, GPU and RAM - the brains.
I bought an E6400 2.13GHz with a Gigabyte mobo, 4GB of 800MHz DDR2 and a Radeon X1650 Pro.
I just retired the CPU and mobo in 2012/13, when I experimented with my current PC: an AMD APU + dedicated GPU (dual graphics).
I'm excited to be looking at a future replacement for my PC. We're on the horizon of some interesting changes that I don't even understand (what was this article about? Lol).
I seem to recall, from a casual glance at an article (on this site) some 9 years ago, that Intel basically got lucky, or fluked it as it were - something to do with what they were doing with the Pentium M, which caused them to move away from the P3/P4 stuff. Hum... damned if I can remember what it was about, though.
The Pentium 3 architecture was having difficulty increasing performance, so they replaced it with the Pentium 4's NetBurst. They had their Israel team continue work on the Pentium 3, which turned into the Pentium M.
Firstly macro-op fusion is hardly an x86 exclusive these days. Many (all?) ARMv8 CPUs use it, as do the most recent POWER CPUs. Like the x86 case, it's used to fuse together pairs of instructions that commonly co-occur. Compare and branch is a common example, but other common examples in RISC are instruction pairs that are used to create large constants in a register, or to generate large constant offsets for loads/stores.
Secondly you suggest that the ROB is an expensive data structure. This is misleading. The ROB itself is primarily a FIFO and can easily be grown. The problem is that storing more items in the ROB requires more physical registers and more load/store queue entries, and it is THESE structures that are difficult and expensive to grow. This suggests that using alternative structures for the load/store queues, and alternative mechanisms for scavenging physical registers could allow for much larger ROBs, and in fact Intel has published a lot of work on this (but has so far done apparently nothing with this research, even though the first such publications were late 90s --- I would not be surprised if Apple provides us with a CPU implementing these ideas before Intel does).
It wasn't written about to the exclusion of all other microarchitectures; it was written focusing on x86 back in 2006. At the time, the ROB was described as expensive by Intel, though I appreciate that might have changed.
10 years to double single-core performance, damn. I honestly thought Sandy Bridge was a bigger improvement than that. Only 4 times faster in multi-core, too.
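Spread over a decade, those multiples are thin annual gains. A quick conversion, taking the rough 2x single-thread / 4x multi-thread figures above as given:

```python
# Convert "2x in 10 years" into an equivalent annual improvement rate.
years = 10
single_thread_gain = 2.0   # roughly what the E6400 -> i5-6600 comparison showed
multi_thread_gain = 4.0

for label, gain in [("single-thread", single_thread_gain), ("multi-thread", multi_thread_gain)]:
    annual = gain ** (1 / years) - 1
    print(f"{label}: {gain:.0f}x over {years} years = {annual * 100:.1f}% per year")
# single-thread: ~7.2% per year; multi-thread: ~14.9% per year.
```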
Glad to see my 4570S is still basically top of the line. Kinda hard to believe my 3-year-old computer is still bleeding edge, but I guess that's how little room for improvement there is now that Moore's Law is done.
Guess if Windows 11 brings back normal functionality to the OS and removes "apps" entirely I'll have to upgrade to a DX12 capable card. But I honestly don't think that's gonna happen.
I really have no idea what I'm gonna do OS wise. Like, I'm sure my computers won't hold up forever. But Windows 10 is unusable and Linux doesn't have proper support still.
The computer industry, once a bastion of capitalism and free markets, rife with options and competition, has now become truly monopolistic. Guess I'm just lamenting the old days, but at the same time I am truly wondering how I'll handle my computing needs in 5 years. Windows 10 is totally unacceptable.
Now, just because you're not capable of using it doesn't mean everyone else is incapable. There are a variety of remedial computer courses available; why not have a word with your local college?
The 4570S isn't basically top of the line. It and the current i5s are 65W TDP parts; the latest 91W i7 is easily 33% faster. Just run the benchmark in CPU-Z to see how you compare.
Linux Mint has been my primary OS since early 2013. I've been tinkering with various distros as an alternative to Windows since Slackware in the late 1990s. I'm not entirely sure what you mean by "doesn't have proper support", but I don't encourage people to make a full conversion away from Windows just because the current user interface isn't familiar.
There's a lot more you have to figure out when you switch from Windows to Linux than you'd need to learn going from, say, Windows 7 to Windows 10, and the transition isn't easy. My suggestion is to purchase a second-hand business-class laptop like a Dell Latitude or HP ProBook (being careful to avoid AMD GPUs) and try out a few different mainstream distros. Don't invest a lot of money into it, and be prepared to sift through forums seeking answers to questions you might have about how to make your daily chores work under a very different OS.
Even now, I still keep Windows around for certain games I'm fond of but don't want to muck around with in Wine to make work. Steam's Linux-friendly list has gotten a lot longer in the past couple of years thanks to Valve pushing Linux for the Steam Box, and I think by the time Windows 7 is no longer supported by Microsoft, I'll be perfectly happy leaving Windows completely behind.
That said, 10 is a good OS at its core. The UI doesn't appeal to everyone and it most certainly is collecting and sending a lot of data about what you do back to Microsoft, but it works well enough if your computing needs are in line with the average home user's (web browsing, video streaming, gaming... those modest sorts of things). Linux can and does do all those things, but differently, using programs that are unfamiliar... oh, and GIMP sucks compared to Photoshop. Just about every time I need to edit an image in Linux, I get this urge to succumb to the Get Windows 10 nagware and let Microsoft go full Big Brother on my computing... then I come to my senses.
GIMP is not the only, ahem, "Windows ecosystem alternative" that is a total piece of crap on Linux. Anything outside of the browser window sucks, which tends to happen when your code maintainers are unpaid hobbyists and/or 14 years old.
I finally relegated my E6400-based system from its role as my primary computer and bought a new one (6700K, 950 Pro SSD, 32 GB RAM) a couple of weeks ago.
While the new one is certainly faster at certain tasks the biggest advantage for me is significantly lower power consumption (30W idle, 90W under load versus 90W idle and 160-180W under load for the old one) and consequently less noise and less heat generation.
Core2 has aged well for me, especially after I added a Samsung 830 to the system.
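Out of curiosity, the idle-power gap quoted above adds up over a year. A rough sketch; the daily idle hours and electricity price are assumptions, so adjust for your own situation:

```python
# Rough yearly cost of the idle-power difference quoted above (90 W vs 30 W).
# Electricity price and usage pattern are assumptions; adjust for your situation.

old_idle_w, new_idle_w = 90, 30
hours_per_day = 8                 # assumed: machine idling 8 h/day
price_per_kwh = 0.15              # assumed: $0.15 per kWh

saved_kwh = (old_idle_w - new_idle_w) / 1000 * hours_per_day * 365
print(f"{saved_kwh:.0f} kWh/year saved, about ${saved_kwh * price_per_kwh:.0f}/year")
# -> roughly 175 kWh and ~$26 a year from idle alone, before load, noise and heat.
```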
NVMe may not be all it's cracked up to be. For the most part, it limits you to booting Windows 8 and higher, and good luck with the free upgrade to Windows 10 (which supposedly ends tomorrow).
Yeah, a lot of those assumptions and guesstimates for the future seem either overly optimistic or seem to ignore realities. I realize board power doesn't equate to average power use, but you are still talking about a max power consumption that would drain a current cell phone battery dead in less than an hour, even on some of the biggest phone batteries (quick math after this comment).
Beyond that is the heat dissipation: that phone is going to get mighty hot trying to dissipate 8+ watts out of even a large phone chassis.
As pointed out, 32 cores seems a wee bit excessive. A lot of it seems to be "if we take it to the logical extreme" as opposed to "what we think is likely".
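The battery-life point is easy to sanity-check: a typical 2016 flagship battery holds on the order of 11-12 Wh, so a sustained draw in the projected range empties it in about an hour. A quick sketch with assumed (ballpark) numbers:

```python
# Quick sanity check on runtime at a given sustained SoC power draw.
# Battery size is an assumed, typical 2016 flagship figure (~3000 mAh at 3.85 V).

battery_wh = 3.0 * 3.85          # about 11.6 Wh

for soc_watts in (2, 8, 15):     # light load, the projected "8+ W", something higher
    hours = battery_wh / soc_watts
    print(f"{soc_watts:>2} W sustained -> battery flat in about {hours:.1f} h")
# At 8 W the SoC alone flattens it in well under an hour and a half, before the
# display and radios are counted, which only make it worse.
```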
Take a 45nm C2Q Q9650 ($50 on eBay), overclock it to 4.0GHz, and you will be as fast as AMD's FX-9590 running at 220W. An older motherboard and DDR2 will be harder to come by, but it is sad how AMD never managed to catch up to Core 2 after all these years. The E6400 was my first Intel after switching to AMD back in the original Pentium days, and I have never looked back at AMD again.
I made an upgrade from a C2D E6550 to a Q9650 in my old Dell Optiplex 755 MT, plus 4x 2GB DDR2-800, an Intel 535 SSD 240GB, a Sapphire Radeon HD 7750 1GB GDDR5, a Sound Blaster X-Fi, and a USB 3.0 PCI-E card, running Windows 7 Professional. Three times more power than the original Dell configuration :-)
I just logged in to tell you that I'm reading this article on my desktop PC, which has an Intel Core 2 Duo E4300 processor (1.8 GHz, 200 MHz FSB) with 4 GB of RAM (started with 2). When I wanted (or needed) to, I overclocked this processor to 3 GHz (333 MHz FSB). My PC will have its 10-year anniversary this December. Over the years I upgraded the video card (for 1080p H.264 hardware decoding, and for games back when I still played them) and added more hard drives. The PC has enough performance for what I'm using it for right now, so I would say that this is a good processor.
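The overclock falls straight out of the FSB × multiplier arithmetic; the 9x multiplier below is implied by the stock 1.8 GHz / 200 MHz figures rather than quoted anywhere:

```python
# Core clock = FSB * multiplier. The E4300 numbers above imply a 9x multiplier.
multiplier = 1800 / 200          # 9.0, from the stock 1.8 GHz at 200 MHz FSB

for fsb_mhz in (200, 333):
    print(f"FSB {fsb_mhz} MHz x {multiplier:.0f} = {fsb_mhz * multiplier / 1000:.1f} GHz core clock")
# 200 MHz -> 1.8 GHz stock, 333 MHz -> ~3.0 GHz, matching the overclock described.
```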
I bought a C2D E6300 the week it came out, my first Intel CPU since 2000. My previous CPUs had been an AMD Athlon 64 and an AMD Athlon Thunderbird.
That E6300 remains my all-time favourite CPU. It's still running in a friend's PC (@ 2.77GHz, which I overclocked it to soon after getting it). It was just *so* fast compared to my old PC. Everything instantly got faster (and I hadn't even upgraded my GPU!).
My E6300 is still running fine in a PC I donated to a friend. It was set to 3GHz within a few days of purchase and never moved from that speed. Once or twice I changed the CPU fan as it was getting noisy.
These PCs are still perfectly fine if you install an SSD. I did it recently on an Acer Aspire T671 desktop. After modding the BIOS to enable AHCI, I put in an 850 EVO (it runs at SATA 2 speed) and a pretty basic Nvidia graphics card. The system became super fast and runs Windows 10 perfectly fine. You don't need a faster processor; all you need is to get rid of the HDDs.
I'm still running an AMD Athlon X2 4850 at 2.5GHz as a file server + MythTV box. It supports ECC, is stable, and has enough grunt to do its job, so why replace it? Yes, I could get a bit better energy efficiency, but in my climate heating is needed more than 50% of the time, and new hardware has its own risks of compatibility issues, etc.
+10 for AnandTech again, the article was great as always!
I'm posting this on a Macbook with an E6600 2.4 GHz part. It's still rockin' after six years of constantly being tossed into a backpack. The comparisons between C2D and the latest i5 CPUs don't show how good these old CPUs really are - they're slow for hard number crunching and video encoding but they're plenty fast for typical workday tasks like Web browsing and even running server VMs. With a fast SSD and lots of RAM, processor performance ends up being less important.
That's too bad for Intel and computer manufacturers because people see no need to upgrade. A 50% performance boost may look like a lot on synthetic benchmarks but it's meaningless in the real world.
"With a fast SSD and lots of RAM, processor performance ends up being less important."
I remember back when I could take on Icecrown raids in WoW with my T7200-based Macbook. And I actually just stopped using my T7500-based Macbook a few months ago. For a couple years I thought about seeing if an SSD would perk it back up, but decided the memory bandwidth and size limitation, and graphics, was just not worth the effort. Funny that you're not impressed by i5s; I use a laptop with an i5-6200U, now. (Some good deals with those right now, especially if you can put up with the integrated graphics instead of a discrete GPU.) But then, my Macbooks were about 3 years older than yours :)
Replaced three Q6600s on P45 systems with socket-converted Xeon X5492s at $60 each off eBay. Got 3.4GHz quads now, never using more than 60 watts under Prime95 (150 watts "official" TDP), with a 7870/7950 Radeon or GTX 780 running all modern games at 1080p at high or ultra. Doom with Vulkan is quite fun at ultra. Got my kids happy and bought myself a 980 Ti with the savings. If you can live with 8GB (DDR2) or 16GB (DDR3), it's really hard to justify an upgrade from this 10-year-old stuff.
The funny part is that he used to work for Intel, and 6 months after I gave it to him in lieu of some owed cash, he bought a 4790K through the employee program (which isn't nearly as good as you'd think) and built a new system with it.
The Q6600 works so well he's never gotten around to migrating to the new box - so the 4790k is still sitting unused! I'm thinking of buying it off him. I do 3D rendering and can use the extra render node.
That's a good point. Like answering the question "are you willing to pay $800 for a new CPU to double the computer's speed?" - most consumers say no. It all comes down to the mass-market price.
Look up what Amazon (and anybody else buying a server) pays for the rest of the computer and tell me they won't pay $800 (per core) to double the computer's speed. It isn't a question of cost, Intel just can't do it (and nobody else can make a computer as fast as Intel, although IBM seems to be getting close, and AMD might get back in the "almost as good for cheap" game).
The Core 2 architecture has served me well. Just last year I replaced my server at home which was based on a Core 2 Duo E6600 on a 965 chipset based motherboard. The only reason for the upgrade is that the CPU was having a difficult time handling transcoding jobs to several Plex clients at once.
The desktop PC my kids use is Core 2 based, though slightly newer: it's a Core 2 Quad Q9400 machine. It is the family "gaming" PC, if you dare call it that. With a GT 730 in it, it runs the older games my kids play very well, and Windows 10 hums along just fine.
I'm still using a Core 2 Duo E8600 in my desktop. In an Abit P-35 Pro motherboard. The damn thing just works too well to get rid of, and I love the Abit board.
My 10-year-old E6600 with an EVGA board and EVGA/Nvidia GTX 295 video card is also a great space heater. CUDA on the card extended the utility of the setup. The only limitation is that, with no hardware video decoding, streaming tops out at 1440. Waiting for Kaby Lake, or a better on-die Intel GPU, to be able to handle 4K @ 60fps over HDMI, not USB 3(+).
The Core 2 architecture was developed in Israel by an Intel team working on mobile processors. Intel suddenly realized that they had a terrific chip on their hands and ran with it. The rest is history. http://www.seattletimes.com/business/how-israel-sa...
So a 10-year-old chip is about half the performance of today's price equivalent. I'd have hoped today's tech would be more like 10 times better instead of just 2.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
158 Comments
Back to Article
Dobson123 - Wednesday, July 27, 2016 - link
I'm getting old.3ogdy - Wednesday, July 27, 2016 - link
That's what I thought about when I read "TEN year anniversary". It certainly doesn't feel like it was yesterday...but it certainly feels as old as "last month" is in my mind and that's mostly thanks to i7s, FXs, IPS, SSDs and some other things that proved to be more or less of a landmark in tech history.close - Thursday, July 28, 2016 - link
I just realized I have an old HP desktop with a C2D E6400 that will turn 10 in a few months and it's still humming along nicely every day. It ran XP until this May when I switched it to Win10 (and a brand new SSD). The kind of performance it offers in day to day work even to this day amazes me and sometimes it even makes me wonder why people with very basic workloads would buy more expensive stuff than this.junky77 - Thursday, July 28, 2016 - link
marketing, misinformation, lies and the need to feel secure and have something "better"Solandri - Friday, July 29, 2016 - link
How do you think those of us old enough to remember the 6800 and 8088 feel?JimmiG - Sunday, July 31, 2016 - link
Well my first computer had a 6510 running at 1 MHz.Funnily enough, I never owned a Core 2 CPU. I had an AM2+ motherboard and I went the route of the Athlon X2, Phenom and then Phenom II before finally switching to Intel with a Haswell i7.
Core 2 really changed the CPU landscape. For the first time in several years, Intel firmly beat AMD in efficiency and raw performance, something AMD has still not recovered from.
oynaz - Friday, August 19, 2016 - link
We miss or C64s and AmigasArtShapiro - Tuesday, August 23, 2016 - link
What about those of us who encountered vacuum tube computers?AndrewJacksonZA - Wednesday, July 27, 2016 - link
I'm still using my E6750... :-)just4U - Thursday, July 28, 2016 - link
I just retired my dads E6750. It was actually still trucking along in a Asus Nvidia board that I had figured would be dodgy because the huge aluminum heatsink on the chipset was just nasty.. Made the whole system a heatscore. Damned if that thing didn't last right into 2016. Surprised the hell out of me.patel21 - Thursday, July 28, 2016 - link
Me Q6600 ;-)nathanddrews - Thursday, July 28, 2016 - link
Me too! Great chip!Notmyusualid - Thursday, July 28, 2016 - link
Had my G0 stepping just as soon as it dropped.Coming from a high freq Netburst, I was thrown back, by the difference.
Since then I've bought Xtreme version processors... Until now, its been money well spent.
KLC - Thursday, July 28, 2016 - link
Me too.rarson - Thursday, August 4, 2016 - link
I built my current PC back in 2007 using a Pentium Dual Core E2160 (the $65 bang for the buck king), which easily overclocked to 3 GHz, in an Abit IP35 Pro. Several years ago I replaced the Pentium with a C2D E8600. I'm still using it today. (I had the Q9550 in there for a while, but the Abit board was extremely finnicky with it and I found that the E8600 was a much better overclocker.)paffinity - Wednesday, July 27, 2016 - link
Merom architecture was good architecture.CajunArson - Wednesday, July 27, 2016 - link
To quote Gross Pointe Blank: Ten years man!! TEN YEARS!guidryp - Wednesday, July 27, 2016 - link
Too bad you didn't test something with a bit more clock speed.So you have ~2GHz vs ~4GHz and it's half as fast on single threaded...
Ranger1065 - Wednesday, July 27, 2016 - link
I owned the E6600 and my Q6600 system from around 2008 is still running. Thanks for an interesting and nostalgic read :)Beany2013 - Wednesday, July 27, 2016 - link
Built a Q6600 rig for a mate just as they were going EOL and were getting cheap. It's still trucking, although I suspect the memory bus is getting flaky. Time for a rebuild, methinks.And a monster NAS to store the likely hundreds of thousands of photos she's processed on it and which are stuck around on multiple USB HDDs in her basement.
It's not just CPUs that have moved on - who'd have thought ten years ago that a *good* four bay NAS that can do virtualisation would be a thing you could get for under £350/$500 (QNAP TS451) without disks? Hell, you could barely build even a budget desktop machine (just the tower, no monitor etc) for that back then.
God I feel old.
saratoga4 - Wednesday, July 27, 2016 - link
>As we can see, by 2007 it was predicted that we would be on 10nm chipsShould be 100 nm (0.1 microns).
Jehab - Wednesday, July 27, 2016 - link
Yeah, that is a massive error, lol.hammer256 - Wednesday, July 27, 2016 - link
If I remember correctly, intel was running at 65nm in 2007 right? So I guess that was ahead of the curve at the time.JlHADJOE - Saturday, July 30, 2016 - link
And the 2001 ITR roadmap actually predicted 22nm for 2016. Despite the delays getting to 14/16nm the industry is actually way ahead of the curve.http://www2.lbl.gov/Science-Articles/Archive/ALS-E...
melgross - Wednesday, July 27, 2016 - link
Exactly! I was going to post that myself. Once it's understood that it's actually 100nm, the other numbers make sense, otherwise, they don't.Walkermoon - Wednesday, July 27, 2016 - link
Just signed up to say the same.Ian Cutress - Wednesday, July 27, 2016 - link
Derp, I misread the table in a rush. Updated.Pissedoffyouth - Wednesday, July 27, 2016 - link
Could you bench it against an AMD A10 Kaveri? That would be goodGc - Saturday, July 30, 2016 - link
AMD A10-7800 (Kaveri) is in three of the bar charts on page 6. It appears to benefit from 4 cores in two of the comparisons.Zaxx420 - Wednesday, July 27, 2016 - link
Still have a E8400 rig that I use every day...with it o/ced to 4GHz, 8gb of DDR2-1066 and a OCZ Vertex 2 SSD plus it's 6mb of cache on a P45 mobo...it can hold its own to this day...easily. The E8000 series is one of the best 'future proof' cpus ever...next up imo will prove to be Sandy Bridge. Have a 2500K at 4.5GHz on a Z68 mobo, 16gb DDR3-2400 and a Samsung 850 Pro ssd...and now a GTX 1060...plays any game I want at 1080 and max quality...easily.e1jones - Wednesday, July 27, 2016 - link
My E8400 is still my daily driver, 4x 2gb and an SSD swapped in later as the boot drive. Still runs great, except it tends to get bogged down by the TrustedInstaller and the Firefox memory leaks.rarson - Friday, August 5, 2016 - link
I've got an E8600 in an Abit IP35 Pro motherboard. I was having a hard time finding DDR2-1066 last I looked, so I settled for 800. With an SSD and 7870, it's surprising how well it still games. I don't think I'll upgrade the GPU again just due to the fact that I'm limited to PCI-e 2.FourEyedGeek - Monday, August 8, 2016 - link
You could get a higher end GPU and still benefit from increased performance, then get a new CPU motherboard combo when you want too.BrokenCrayons - Wednesday, July 27, 2016 - link
I just upgraded out of a Q6600 and 4GB DDR2 about 2 months ago and I admit that I was still kicking around the idea of leaving it alone as I was pulling the motherboard out of the case. I replaced it with a cheap AMD 860k and 16GB DDR3 which really hasn't done a lot to improve the system's performance. In retrospect, I think I could realistically have squeezed another couple of years out of it, but the motherboard's NIC was iffy and I really wanted reliable ethernet.As for laptops, I've got a couple C2Ds kicking around that are perfectly adequate (T2310 & P8400) for daily use. I really can't see any point in replacing them just yet. Core was a good design through all its iterations.
Beany2013 - Wednesday, July 27, 2016 - link
I like your style - rather than drop $100 on a midlevel intel NIC, you replace an entire platform.I strongly approve of these economics :-)
Michael Bay - Thursday, July 28, 2016 - link
USB3 is kind of nice.BrokenCrayons - Thursday, July 28, 2016 - link
Well the NIC wasn't the only reason, but it was the last in a series of others that I was already coping with that tipped the scales. The upgrade was under $200 for the board, processor and memory so it really boiled down to one weekend dinner out to a mid-range restaurant. It was worth it for more reliable Steam streaming and fewer VNC disconnects as that wired ethernet port is the only means by which I regularly interact with my desktop since it has no monitor and is crammed into a corner in my utility room.artk2219 - Friday, July 29, 2016 - link
Why didn't you go for an FX if you dont mind me asking? You liked the FM2+ platform a bit better?BrokenCrayons - Friday, July 29, 2016 - link
Actually, I didn't give much of anything in the system a very close look before buying. I admittedly did about twenty minutes of research to make sure the 860k and the bottom feeder motherboard I'd picked would play nicely together before making a purchase. So the CPU & motherboard pair were the result of laziness and apathy rather than a preference for FM2+.artk2219 - Monday, August 1, 2016 - link
Ah ok gotcha, I just wanted to share that if you had a microcenter near you they sell FX 8320E's bundled with motherboards for 125 to 170 depending on which board you want to use. That can be quite the steal and a great base for a new cheap system once you bump the clocks on the 8320E.Jon Tseng - Wednesday, July 27, 2016 - link
Great chip. Only just upgraded from my QX6850 last month. Paired with a GTX 970 it was doing just fine running all new games maxed out at 1080p. Amazing for something nearly a decade old!!Negative Decibel - Wednesday, July 27, 2016 - link
my E6600 is still kicking.tarqsharq - Wednesday, July 27, 2016 - link
My dad still uses my old E8400 for his main PC. He's getting my old i7-875k soon though.jjj - Wednesday, July 27, 2016 - link
You can't do DRAM in glasses, not in a real way. Since that's what mobile is by 2025.On-package DRAM is next year or soon not 2025.
You can't have big cores either and you need ridiculous GPUs and extreme efficiency. Parallelism and accelerators, that's where computing needs to go, from mobile to server.
We need 10-20 mm3 chips not 100cm2 boards. New NV memories not DRAM and so on.
Will be interesting to see who goes 3D first with logic on logic and then who goes 3D first as the default in the most advanced process.
At the end of the day, even if the shrinking doesn't stop, 2D just can't offer enough for the next form factor. Much higher efficiency is needed and the size of a planar chip would be far too big to fit in the device while the costs would be mad.Much more is needed. For robots too.The costs and efficiency need to scale and with planar it's at best little.
wumpus - Thursday, August 4, 2016 - link
On package DRAM seems to be a "forever coming" tech. AMD Fury-X basically shipped it, and it went nowhere. I'm guessing it will be used whenever Intel or IBM feel it can be used for serious advantage on some high-core server chip, or possibly when Intel want to build a high-speed DRAM cache (with high-speed-bus) and use 3dXpoint for "main memory".The slow rollout is shocking. I'm guessing nvidia eventually gave up with it and went with tiling (see the Kanter demo on left, but ignore the thread: nothing but fanboys beating their chests).
willis936 - Wednesday, July 27, 2016 - link
I'm certainly no silicon R&D expert but I'm very skeptical of those projections.Mr.Goodcat - Wednesday, July 27, 2016 - link
Typo:"On the later, we get the prediction that 450nm wafers should be in play at around 2021 for DRAM"
450nm wafers would be truly interesting ;-)
wumpus - Thursday, August 4, 2016 - link
I like the rapidly falling static safety. Don't breathe on a 2030 chip.faizoff - Wednesday, July 27, 2016 - link
My first Core 2 Duo was an E4400 that I bought in 2007 I believe, thing lasted me up to 2011 when I upgraded to an i5 2500k. I should've kept that C2D just for nostalgia's sake, I used it intermittently as a plex server and that thing worked great on FreeNAS. The only issue was it was really noisy and would get hot.Notmyusualid - Thursday, July 28, 2016 - link
I've got a few old servers kicking around, all with valid Win server licenses, but due to UK electricity costs, just can't bring myself to have them running at home 24/7 just to serve a backup, or yet another Breaking Bad viewing session... :) which we can do locally now.Akrovah - Wednesday, July 27, 2016 - link
My old E6700 is still alive and kicking. I only just replaced it as my primary system when Devil's Canyon came along. Still use it for my four year old's "first computer."djayjp - Wednesday, July 27, 2016 - link
Not a particle physicist, nor electrical engineer, so just some pie in the sky wondering here, but wouldn't it be possible to build transistors using carbon nanotubes, or light itself (using nano sized mirrors/interferometers, like DLP) or even basing the transistor gates off of protons/sub atomic particles?michael2k - Wednesday, July 27, 2016 - link
I think a more interesting question is using glass as a substrate. Imagine printing nand, CPU, GPU, ram, and along the bezels of a smartphone.That reduces a phone to six components: a display, a transducer for sound, a mic, a battery, a radio, and a chassis, which would have all the antennas.
joex4444 - Wednesday, July 27, 2016 - link
Particle physicist here. Light has the tricky property that it travels at the speed of light so I can't imagine it working but perhaps I'm envisioning your concept differently than you are. For carbon nanotubes, you'll need a materials engineer or a condensed matter physicist.3DoubleD - Wednesday, July 27, 2016 - link
Materials/Semiconductor Physics Engineer here. The problem is not what we CAN do, the problem is what is economically possible at scale. For example, FinFETs were demonstrated at the turn of the century, but took all of those years to become (1) necessary - planar transistor were getting too leaky, and (2) possible to fabricate economically in large scales.Researchers have created smaller, faster transistors years ago, but it takes a lot of time and effort to develop the EUV or quadruple patterning technologies that enable these devices to be reliably and affordably manufactured.
So I think the problem in moving "beyond silicon" is not that we don't have alternatives, it is that we have many alternatives, we just don't know which will scale. It becomes less of a purely engineering problem and manufacturing business problem. When new technologies relied purely on the established silicon industry alone, you could reasonably extrapolate how much each new technology would cost as the nodes were scaled down. When we talk about using III-V FinFETs/ All Around Gates or graphene and carbon nanotubes, we don't really know how those things will scale with the existing processes as we move them from the laboratory to the manufacturing line.
I've been looking forward to this transition for years. People moan that it is the end of Moore's Law, but that could be a good thing. Silicon is a great material for forming logic circuits for many reasons, but it also has many downsides. While silicon never reached 10 GHz (as Intel once predicted), other materials easily blow past 100 GHz transistor switching speeds. When the massive engines that work tirelessly to reduce our lithography nodes nm by nm are aimed at "the next big thing", we might be pleasantly surprised by a whole new paradigm of performance.
So what competes with modern day Si CMOS on speed, power usage, and cost? Nothing... yet!
djayjp - Thursday, July 28, 2016 - link
Yes, it's fascinating stuff. Thanks for reminding me about that. I recall now that I think it was graphene that enabled those insanely high switching speeds, due to its incredible conductivity/efficiency. Hopefully it can be made economically feasible at some point! Imagine the next GPU being 10x smaller and operating at 100x the clock speed. A GTX 1080 Ti x 1000! Finally we could do real-time true global illumination, ha...
jeffry - Monday, August 1, 2016 - link
That's a good point. Like, answering the question "are you willing to pay $800 for a new CPU to double the computer's speed?" Most consumers say no. It all comes down to the mass market price.
wumpus - Thursday, August 4, 2016 - link
From the birth of the UNIVAC until 10 years ago, consumers consistently said YES! and plunked down their money. Doubling the (per thread) speed of a Core 2 Duo is going to cost more than $800. Also, the cost of the RAM on servers is *WAY* more than $800, so you can expect that if Intel could double the power of each core, they could crank prices up by at least $800 per core on Xeons. They can't, and neither can IBM or AMD.
Jaybus - Thursday, July 28, 2016 - link
Sure, but that speed is dependent on the medium. There are some proposed optical transistors using electromagnetically induced transparency. Long way off. However, silicon photonics could change some things. Capacitance is the killer for electronic interconnects, whether chip-to-chip or on-chip bus. An optical interconnect could greatly increase bandwidth without increasing the chip's power dissipation. I think an electronic-photonic hybrid is more likely, since silicon photonics components can be made on a CMOS process. We are already beginning to see optical PCI Express being deployed. I could definitely see a 3D approach where 2D electronic layers are connected through an optical rather than electronic bus.
djayjp - Thursday, July 28, 2016 - link
Yes, transparency, like polarized windows that either become transparent or opaque when a current is applied (to the liquid crystals?). I wonder how small they could be made. It would be incredibly power efficient I would think.bcronce - Wednesday, July 27, 2016 - link
My AMD Athlon XP 2500+ lasted me until a Nehalem i7 at 2.66GHz. It was a slight.... upgrade
artk2219 - Friday, July 29, 2016 - link
Very minor, I'm sure you barely noticed :).
[email protected] - Wednesday, July 27, 2016 - link
I have a Q6600 in my household and it's still running well. In terms of performance, the E6400 is about the same as the CPUs (e.g. Z3735F/Z3745F) used in nearly all cloudbooks these days.
Michael Bay - Thursday, July 28, 2016 - link
Yep, I was surprised at that when looking through the benchmarks. Turns out Atom is not so slow after all.
stardude82 - Wednesday, July 27, 2016 - link
I've just finished decommissioning all my Core 2 Duo parts, several of which had been upgraded with second-hand Sandy Bridge components. Yeah, CPU performance has been relatively stagnant. CPUs have come to where commercial jets are now in their technological development: jets now fly slower than they did in the 1960s, but have much better fuel economy per seat.
Not noted in the E6400 vs. i5-6600 comparison is that they both have the same TDP, which is pretty impressive. Also, you've got to take inflation into account, which would bring the CPU price up to $256 or thereabouts - enough for an i5-6600K.
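To make the inflation arithmetic above easy to check, here is a minimal Python sketch; the cumulative 2006-to-2016 factor is back-solved from the ~$256 figure in the comment rather than taken from official CPI tables, so treat it as illustrative only.

```python
# Rough sketch of the inflation adjustment discussed above.
# ASSUMPTION: a cumulative inflation factor of ~1.14 for 2006 -> 2016,
# chosen to match the ~$256 figure in the comment; use real CPI data
# for anything more than a sanity check.
launch_price_2006 = 224.00   # Core 2 Duo E6400 launch price (USD)
cpi_factor = 1.14            # assumed cumulative inflation, 2006 -> 2016

adjusted_2016 = launch_price_2006 * cpi_factor
print(f"${launch_price_2006:.0f} in 2006 is roughly ${adjusted_2016:.0f} in 2016 dollars")
# -> roughly $255, in line with the "$256 or thereabouts" above
```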
ScottAD - Wednesday, July 27, 2016 - link
One could argue that while Core put Intel on top of the heap again, Sandy Bridge was a more important shift in design, and as a result many users went from Conroe to Sandy Bridge and have stayed there. That pretty much defines my PC currently. Haven't needed to upgrade. Crazy that a decade has gone by like nothing.
ianmills - Wednesday, July 27, 2016 - link
When a website has trouble keeping up with current content and instead recycles decade-old content.... things that make you go hmm...
Ian Cutress - Wednesday, July 27, 2016 - link
I'm the CPU editor; we've been up to date for every major CPU launch for the last couple of years, sourcing units that Intel hasn't sourced for other websites, and we have done comprehensive and extensive reviews of every leading x86 development. We have had every Haswell-K (2), Haswell-E (3), Broadwell (2), Broadwell E3 Xeon (3), Broadwell-E (4) and Skylake-K (2) CPU tested and reviewed on each official day of launch. We have covered Kaveri and Carrizo in deep, repeated detail over the last few years as well. This is an important chip, and today marks an important milestone.
Hmm...?
smilingcrow - Wednesday, July 27, 2016 - link
AnandTech do CPUs very well, can't think of anyone better. Kudos and thanks to you 'guys'.
"This primarily leaves ARM (who was recently acquired by Softbank)"
They are under offer, so it's not guaranteed to go through, and ARM isn't a person. :)
ianmills - Wednesday, July 27, 2016 - link
I agree you do a good job with CPUs. It's some of the other topics where this site has slowed down compared to previous years.
fanofanand - Thursday, July 28, 2016 - link
Ian, your reviews are always top notch. You have incredible knowledge, and your understanding of both CPU and memory architecture etc. is unparalleled in journalism. Ignore the trolls, this was a fantastic article.
Different editors for different content. Honestly, I thought this was a great piece. I think this site is not quite up to what it was back then - just go read the articles for Fermi, or when Bulldozer released, and so on; there were much deeper dives into the architecture. I realize that Intel and the other manufacturers may not always be willing to release much info, and they seem to release less these days, but I don't know -- the site feels different.
Honestly, and I am pretty forgiving, being as late as this site has been on the recent GPU reviews is pretty inexcusable. Although that obviously has nothing to do with Ian - I always like Ian's articles.
Ian Cutress - Wednesday, July 27, 2016 - link
Thanks! :)
fanofanand - Thursday, July 28, 2016 - link
The Pascal review was pretty damn deep, not sure how much farther you expect them to dive. That said, it was very, very late.
Michael Bay - Thursday, July 28, 2016 - link
ADHD millennial detected.
Notmyusualid - Thursday, July 28, 2016 - link
Hey Rain Cloud! I enjoyed it, as did many others here - try reading the friendly discussion!
tipoo - Wednesday, July 27, 2016 - link
I like articles like these. Sometimes certain processors stick around as the baseline in my brain even after a decade (holy hell!). Core 2 Duo is always a reference point for me, so is a 3GHz P4.
rocky12345 - Wednesday, July 27, 2016 - link
Yeah, I still have a Q6600 Core 2 Quad running strong in the front room, OC'd to 3700MHz; it's been running like that since day 1.
wumpus - Thursday, August 4, 2016 - link
Wow. I'd assumed that they would at least burn out so that they would need replacement (like my old super celeries). I'm sure you can measure a speed increase between a modern i5 and yours, but it would be hard to notice it.
chekk - Wednesday, July 27, 2016 - link
Nice, thanks Ian. Interesting to look back and then ahead.
I still use my E6400 in a media playback machine using the first "good" integrated graphics, the Nvidia 9300. Since it runs at stock frequency @ 1V (VID spec is 1.325V), it's pretty efficient too.
pixelstuff - Wednesday, July 27, 2016 - link
I think Core 2 essentially accelerated the market saturation we are seeing, which is causing the PC market to decline a bit. My Core 2 E8400 still runs Windows 10 relatively fine, although I have built two more systems since because I like being near the cutting edge. However, I know quite a few people still using Core 2 CPUs for their basic computing needs.
There just haven't been any new apps which the entire world needs that are more resource intensive than a word processor or web browser. So the PC replacement market has stagnated a bit.
stardude82 - Wednesday, July 27, 2016 - link
Most Core processors are faster than the ho-hum Cherry Trail offerings you find in low-end PCs. So buying a new cute shiny black little box to replace your beige big box doesn't guarantee much.
It reads a little weird/myopic that only certain technologies are being considered while forecasting all the way out to 2030. For instance, lots of NAND/DRAM discussion but no mention of upcoming or already early-adoption tech like 3D XPoint or memristors, etc. No mention of optoelectronics (like photonic signalling on- and off-chip), no mention of III-V and other 'exotic' materials for chip manufacturing and improved frequency/power scaling (with focus instead devoted to feature sizes/stacking/wafer size/defects). And so on.
I mean, if you're forecasting 5 years ahead, I'd understand. But talking about 15 years into the future while only extrapolating from what's on the market right now -- as opposed to what's in the labs and on drawing boards -- seems a little too pessimistic and/or myopic.
Ian Cutress - Wednesday, July 27, 2016 - link
The full report mentions III-V and SiGe in the remit of future technologies. Anton and I are starting to discuss what parts we can pull out for individual news stories, so stay tuned.
Sam Snead - Wednesday, July 27, 2016 - link
Heck, I still have my NexGen P110 CPU computer set up and run it once in a while. From 1996. Remember the VESA local bus video card? NexGen was later bought by AMD.
stardude82 - Wednesday, July 27, 2016 - link
Ah, I remember Socket 7...
CoreLogicCom - Wednesday, July 27, 2016 - link
I've still got a Dell E1705 laptop that I bought in 2006, which came with a Core Duo that I upgraded to a Core 2 Duo about 4 years in, and I maxed the RAM to 4GB (from the 2GB it came with). It was decent, but really came alive when I put an SSD into it. I still use this laptop for basic stuff, and even some gaming (WoW and SWToR) with the GeForce Go GPU. It's definitely long in the tooth, now running Windows 7 (it came with WinXP, but 10 is unsupported on the GPU even though there's a workaround). I'm thinking mobile Kaby Lake and mobile Pascal will be the next laptop I keep for another 10 years.
Nacho - Wednesday, July 27, 2016 - link
Can you beat me? Last month I finally upgraded my primary rig from a C2D E4300 @ 2.7GHz! Memory started failing last year and I couldn't find cheap DDR2, so I was down to 2GB.
Went for an i5-6500 and 16GB of DDR4. The difference is incredible!
Filiprino - Wednesday, July 27, 2016 - link
So much time since reading Anand's article on Conroe.
3ogdy - Wednesday, July 27, 2016 - link
Great article, Ian! I've found it a very good read and it's always nice to take a look back and analyze what we've been through so far.
I also wanna point out just a few mini-errors I've found in the article:
The Core 2 processors all came from a 143mm2 die, compared TO the 162mm2 of Pentium D. /
by comparison to the large die sizes we see IN 2016 for things like the P100 /
whereas the popular Core 2 Duo E6400 at $224 WAS at the same price as the Core i5-6600.
As we NOW know, on-die IMCs are the big thing.
Geometrical Scaling when this could NO longer operate
By 2020-25 device features will be REDUCED (?)
On the later -> LATTER?
Keep up the amazing work!
Icehawk - Wednesday, July 27, 2016 - link
I replaced my C2D a couple of years ago only because it needed yet another mobo and PSU and I do like shiny things; I'd bet that if it were still around I could pop in my old GTX 660 and run most games just fine at 1080p. At work there are some C2Ds still kicking around... and a P4 with XP! Of course a lot of larger businesses have legacy gear & apps, but it made me chuckle when I saw the P4.
With the plateau in needed performance on the average desktop there just isn't much reason to upgrade these days other than the video card if you are a gamer. Same thing with phones and tablets - why aren't iPads selling? Everyone got one and doesn't see a need to upgrade! My wife has an original iPad and it works just fine for what she uses it for, so why spend $600 on a new one?
zepi - Wednesday, July 27, 2016 - link
You are not mentioning FPGAs and the non-volatile memory revolution which could very well be coming soon (not just flash, but XPoint and other similar stuff).
Personally I see FPGAs as a clear use for all the transistors we might want to give them.
Program your stuff, let it run through a compiler-profiler, and let its adaptive cloud-trained AI create an optimal "core" for your most performance-intensive code. This recipe is then baked together with the executable, which gets programmed in near-realtime to the FPGA portion of the SoC you are using, only to be reprogrammed when you "alt-tab" to another program.
Obviously we'll still need a massively parallel "GPU" portion on the chip, ASIC blocks for H.265 encode/decode with 8K 120Hz HDR support, encryption/decryption and other similar ASIC usages, and 2-6 "XYZlake" CPUs. The rest of the chip will be FPGA, with ever more intelligent libraries + compilers + profilers used to determine at software compile time the optimal recipe for the FPGA programming.
Not to mention the paradigm changes that fast non-volatile memory (XPoint and future competitors) could bring.
wumpus - Thursday, August 4, 2016 - link
FPGAs are old hat. Granted, it might be nice if they could replace maybe half of their 6T SRAM waste (probably the routing definitions, although they might get away with 4T), but certainly the look-up tables need to be 6T SRAM. I'd claim that the non-volatile revolution happened in FPGAs (mainly off chip) at least 10 years ago.
But at least they can take advantage of the new processes. So don't count them out.
lakerssuperman - Wednesday, July 27, 2016 - link
I'm reading this from my old Sony laptop with a Core 2 Duo and Nvidia GPU in it. With an SSD added, the basic task performance is virtually indistinguishable from my other computers with much newer and more powerful CPUs. Granted, it can get loud under load, but the Core 2 era was still a ways away from the new mobile-focused Intel we have now.
I guess my basic point is that I got this laptop in 2009 and for regular browsing tasks etc. it is still more than adequate, which is a testament both to the quality and longevity of the Core 2 family and to where we are with CPU power in general. Good stuff.
jeffry - Monday, August 1, 2016 - link
I agree. Got a Sony SZ1m in 2007 (I think?) and swapped the Core Duo (Yonah) for a Core 2 Duo T7200 (Merom), because it's 64-bit and now I can run a 64-bit OS and 64-bit software on it.
boozed - Wednesday, July 27, 2016 - link
Funny to think that despite four process shrinks, there's been minimal power and clock improvement since then.
UtilityMax - Wednesday, July 27, 2016 - link
To some of you it may sound like a surprise, but a Core 2 Duo desktop can still be fairly usable as a media consumption device running Windows 10. I am friends with a couple who are financially struggling graduate students. The other day, they brought an ancient Gateway PC with an LCD from work, and they were wondering if they could rebuild it into a PC for their kid. The specs were 2GB of memory and a Pentium E2180 CPU. Inside a box of ancient computer parts which I never throw away, I found an old Radeon graphics card and an 802.11n USB adapter. I told them to buy a Core 2 Duo E4500 processor online because it cost just E4500. After installing Windows 10, the PC runs fairly smoothly. Good enough for web browsing and video streaming. I could even load some older games like Quake 3 and UrbanTerror 4.2 and play them with no glitches.
UtilityMax - Wednesday, July 27, 2016 - link
I mean, the E4500 cost just 5 bucks..
DonMiguel85 - Wednesday, July 27, 2016 - link
Still using a Core 2 Quad Q9550. It bottlenecks most modern games with my GTX 960, but can still run DOOM at 60FPS.
metayoshi - Wednesday, July 27, 2016 - link
Wow. Actually, just last holiday season, I replaced my parents' old P4 system (with 512 MB RAM! and a 250 GB SATA Maxtor HDD!) with my old Core i7-860, since I upgraded to a system with a Core i7-4790K that I got in a Black Friday sale. The old 860 could definitely still run well for everyday tasks and even gaming, so it was more than good enough for my parents, but the video processing capabilities of the more recent chips are a lot better, which is the main reason I updated. Also, the single-threaded performance of the 4790K was amazing, and the Dolphin emulator did not run so well on my 860, so there was that.
Speaking of Core 2, though, I owned an ASUS UL30Vt with the Core 2 Duo SU7300 CPU and an Nvidia GeForce G 210M. While the weak GPU was not so great for high end gaming, the overall laptop was amazing. It was more than powerful enough for everyday tasks, and had amazing battery life. It was pretty much what every ultrabook today desires to be: sleek, slim, but powerful enough with great battery life. That laptop to me was the highlight of the Core 2 era. I was kind of sad to let it go when I upgraded to a more powerful laptop with an Ivy Bridge CPU and 640M LE GPU. I don't think any laptop I owned gave me as much satisfaction as that old Asus. Good times.
Hazly79 - Wednesday, July 27, 2016 - link
History of Intel processors:
i 386 1986
i 486 1989 - 94
Pentium / MMX 1994 - 96
Pentium II 1997
Pentium III 1999
Pentium 4 / 4 HT 2000 - 04
Pentium D 2005
Core [ 2 ] Duo 2006 - 08
Core [ 2 ] Quad 2007 - 08
Core [ i ] Nehalem 2008
Core [ i ] Westmere 2009
Core [ i ] Sandy Bridge 2011
Core [ i ] Ivy Bridge 2012
Core [ i ] Haswell 2013
Core [ i ] Broadwell 2014
Core [ i ] Skylake 2015
Core [ i ] Kabylake 2016
AnnonymousCoward - Wednesday, July 27, 2016 - link
So how many decades will it take till CPUs have significantly faster single-thread performance than a [email protected]?
Notmyusualid - Thursday, July 28, 2016 - link
+1
Mr Perfect - Wednesday, July 27, 2016 - link
Hmm. We've got an ancient 2007 MacBook with a 2GHz C2D (T7200 I think) in it that's still used for web browsing on a daily basis. Upgrading it to 4GB of RAM and an SSD made it surprisingly capable.
It's not all a bed of roses though, as random things will come out of left field and floor it. I think it's mostly Flash-heavy sites, but Twitter and Vine freak it out a little.
Hulk - Wednesday, July 27, 2016 - link
I vividly remember the anticipation and hype leading up to the C2D release, and the years of struggle Intel had with NetBurst before Conroe. It was what I consider the end of the golden age of the CPU. Great job Ian!
Impulseman45 - Wednesday, July 27, 2016 - link
Ah, it wasn't the Pentium Pro it was based on. The Core family was a direct descendant of the Pentium 3 Tualatin. They stopped sales of the Pentium 3 Tualatin because it was outperforming the Pentium 4, and migrated that technology to the notebook line, as it was much more efficient. It became the Pentium M. When Intel realized that the Pentium 4 NetBurst architecture was a dead end and they needed a new chip to go up against AMD, they sourced their notebook chips to build the Core series. See, this is what is called re-writing history. Come on guys, it is very well known that they sourced the Pentium M Yonah for the Core series. I do not know who did your research but it is all wrong. Go back and recheck your information. The Pentium Pro was the granddaddy of all the Pentium 2 and 3 chips, so yeah, you can point to that chip in a vague way as the ancestor. But then the Pentium 4 can be as well. So to get to the point, the Core line's DNA goes back directly to the Pentium 3 Tualatin, so we have all been using very, very hopped-up Pentium 3s for the last 10 years. The Tualatin was an excellent chip. It overclocked like crazy. There were Celeron and server P3 versions, and all of them beat the hell out of the P4. No wonder Intel had to kill it. Do more research so you can post accurate articles, please.
Michael Bay - Thursday, July 28, 2016 - link
>teaching AT about CPU architectures
Damn it, you're a riot!
Impulseman45 - Thursday, July 28, 2016 - link
These are not the original AT guys; they are all new people and they are not doing the research they should be doing. This is how history can get changed. People look to a reputable tech site that got something wrong and it's written in stone: well, AT says this is how it is, even if they are wrong. Go check the history directly from Intel. This article is wrong and that is a fact, period. I felt it just needed to be called out.
natenu - Monday, August 1, 2016 - link
Refreshing to see this comment. HT was a marketing joke to keep up with clock rate shoppers.
wumpus - Tuesday, August 2, 2016 - link
When Dave Barry jokes about "speed is measured in Megahertz" you know you are ripe for some marketing in your engineering.
Ian Cutress - Tuesday, August 2, 2016 - link
To clarify, there was a typo in Johan's original review of the microarchitecture, specifically stating:
'However, Core is clearly a descendant of the Pentium Pro,'
I've updated the article to reflect this, and was under the assumption that my source was correct at the point of doing my research.
wumpus - Tuesday, August 2, 2016 - link
Except that the Pentium Pro was the first chip with the P6 architecture. Pentium 2 was pretty much a Pentium Pro with MMX, a higher clock rate, and slower [off chip but on slot] L2 cache. Pentium 3 was the same with more clock, more MMX (and SSE), and on-chip (full speed) L2 cache.
While I'd have to assume they would pull all the files from the Pentium 3 plans, I'd still call it "Pentium Pro based" because that was where nearly all the architecture came from (with minor upgrades and bug fixes to the core in 2 and 3).
I'm still curious as to exactly how they pulled it off. My main theory is that they duplicated the block diagram of the P6, and then verified each block was correct (and exactly duplicated the P6 at a higher speed), then used the P6 verification to prove that if the blocks were all correct, they had a correct chip.
zodiacfml - Thursday, July 28, 2016 - link
Same here. I thought it was the design of the Pentium M (from the Israel team) that they got Core from. At that time AMD was beating Intel's P4s in performance, efficiency, and price. After a few months, articles were posted with people able to overclock a Pentium M to the characteristics of the AMD CPUs and, of course, beat Pentium 4s at much lower clock speeds. From there, Intel Core was born out of the Pentium M, which is essentially the same only with higher TDP and clock speeds. Then came the Core Duo, then the Core 2 Duo.
I just can't remember where I read it though.
marty1980 - Wednesday, July 27, 2016 - link
I started college in electrical engineering; moved to software after an EE class using C++. I was very excited about and confident in a DIY PC. I knew the Core 2 was on its way. I gathered parts from whatever computers I could scratch together: power supply, case, DVD drive, network card(s), HDDs... everything but the mobo, CPU, GPU and RAM - the brains.
I bought an E6400 2.13GHz with a Gigabyte mobo, 4GB of 800MHz DDR2 and a Radeon X1650 Pro.
I just retired the CPU and mobo in 2012/13 when I experimented with my current PC: an AMD APU + dedicated GPU (dual graphics).
I'm excited to be looking at a future replacement for my PC. We're on the horizon of some interesting changes that I don't even understand (what was this article about? Lol).
just4U - Thursday, July 28, 2016 - link
I seem to recall from a casual glance at an article (on this site) back some 9 years ago.. that Intel basically got lucky, or fluked as it were.. Something to do with what they were doing with the Pentium M which caused them to move away from the P3/P4 stuff.. hum.. damned if I can remember what it was about though.
FourEyedGeek - Tuesday, August 9, 2016 - link
The Pentium 3 architecture was having difficulties increasing performance, so they replaced it with the Pentium 4's NetBurst. They had their Israel department continue work on the Pentium 3, which turned into the Pentium M.
Hazly79 - Thursday, July 28, 2016 - link
Surprised that my 2005 Pentium D 3GHz can still run Diablo 3 (2012) at minimum settings paired with an Nvidia GT 710 ($35 card).
Really great optimization from the Blizzard Entertainment team...
AnnonymousCoward - Thursday, July 28, 2016 - link
Yeah, but too bad the game sucks. Jay doubled it.
name99 - Thursday, July 28, 2016 - link
Two points:
Firstly, macro-op fusion is hardly an x86 exclusive these days. Many (all?) ARMv8 CPUs use it, as do the most recent POWER CPUs. Like the x86 case, it's used to fuse together pairs of instructions that commonly co-occur. Compare-and-branch is a common example, but other common examples in RISC are instruction pairs that are used to create large constants in a register, or to generate large constant offsets for loads/stores.
Secondly, you suggest that the ROB is an expensive data structure. This is misleading. The ROB itself is primarily a FIFO and can easily be grown. The problem is that storing more items in the ROB requires more physical registers and more load/store queue entries, and it is THESE structures that are difficult and expensive to grow. This suggests that using alternative structures for the load/store queues, and alternative mechanisms for scavenging physical registers, could allow for much larger ROBs; in fact Intel has published a lot of work on this (but has so far done apparently nothing with this research, even though the first such publications were in the late 90s --- I would not be surprised if Apple provides us with a CPU implementing these ideas before Intel does).
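As a rough illustration of the point above - that the ROB itself is cheap to grow while the physical register file and load/store queues are the real limits - here is a toy Python model; the entry counts and instruction-mix fractions are made-up placeholders, not figures for any real core.

```python
# Toy model of an out-of-order window: the number of instructions that can
# actually be in flight is capped by whichever resource runs out first,
# which is why growing the ROB alone buys little.
# All numbers below are made-up placeholders, not any real core's figures.

def effective_window(rob_entries, phys_regs, arch_regs, lsq_entries,
                     frac_writes_reg=0.8, frac_mem_ops=0.35):
    rename_budget = phys_regs - arch_regs          # registers free for renaming
    cap_by_regs = rename_budget / frac_writes_reg  # ops the register file can cover
    cap_by_lsq = lsq_entries / frac_mem_ops        # ops the load/store queues can cover
    return int(min(rob_entries, cap_by_regs, cap_by_lsq))

print(effective_window(rob_entries=224, phys_regs=180, arch_regs=16,
                       lsq_entries=72))
# -> 205 here: both the register file and the LSQ bind before a
#    224-entry ROB does in this toy example.
```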
Ian Cutress - Tuesday, August 2, 2016 - link
It wasn't written about to the exclusion of all other microarchitectures; it was written focusing on x86 back in 2006. At the time, the ROB was described as expensive by Intel, though I appreciate that might have changed.
Hrel - Thursday, July 28, 2016 - link
10 years to double single core performance, damn. Honestly thought Sandy Bridge was a bigger improvement than that. Only 4 times faster in multi-core too.
Glad to see my 4570S is still basically top of the line. Kinda hard to believe my 3 year old computer is still bleeding edge, but I guess that's how little room for improvement there is now that Moore's law is done.
Guess if Windows 11 brings back normal functionality to the OS and removes "apps" entirely I'll have to upgrade to a DX12 capable card. But I honestly don't think that's gonna happen.
I really have no idea what I'm gonna do OS wise. Like, I'm sure my computers won't hold up forever. But Windows 10 is unusable and Linux doesn't have proper support still.
The computer industry, once a bastion of capitalism and free markets, rife with options and competition, has now become truly monopolistic. Guess I'm just lamenting the old days, but at the same time I am truly wondering how I'll handle my computing needs in 5 years. Windows 10 is totally unacceptable.
Michael Bay - Thursday, July 28, 2016 - link
I like how desperate you anti-10 shills are getting.
More!
Namisecond - Thursday, July 28, 2016 - link
I do not think that word means what you think it means...
TormDK - Thursday, July 28, 2016 - link
You are right - there is not going to be a Windows 11, and Microsoft is not moving away from "apps".
So you seem stuck between a rock and a hard place if you don't want to go to Linux or a variant, and don't want to remain in the Microsoft ecosystem.
mkaibear - Thursday, July 28, 2016 - link
>Windows 10 is unusable
Now, just because you're not capable of using it doesn't mean everyone else is incapable. There are a variety of remedial computer courses available, why not have a word with your local college?
AnnonymousCoward - Thursday, July 28, 2016 - link
4570S isn't basically top of the line. It and the i5 are 65W TDP. The latest 91W i7 is easily 33% faster. Just run the benchmark in CPU-Z to see how you compare.
BrokenCrayons - Thursday, July 28, 2016 - link
Linux Mint has been my primary OS since early 2013. I've been tinkering with various distros starting with Slackware in the late 1990s as an alternative to Windows. I'm not entirely sure what you mean by "doesn't have proper support" but I don't encourage people to make a full conversion and leave Windows behind just because the current user interface isn't familiar.
There's a lot more you have to figure out when you switch from Windows to Linux than you'd need to learn if going from, say, Windows 7 to Windows 10, and the transition isn't easy. My suggestion is to purchase a second hand business class laptop like a Dell Latitude or HP ProBook, being careful to avoid AMD GPUs in doing so, and try out a few different mainstream distros. Don't invest a lot of money into it and be prepared to sift through forums seeking out answers to questions you might have about how to make your daily chores work under a very different OS.
Even now, I still keep Windows around for certain games I'm fond of but don't want to muck around with in Wine to make work. Steam's Linux-friendly list has gotten a lot longer in the past couple of years thanks to Valve pushing Linux for the Steam Box, and I think by the time Windows 7 is no longer supported by Microsoft, I'll be perfectly happy leaving Windows completely behind.
That said, 10 is a good OS at its core. The UI doesn't appeal to everyone and it most certainly is collecting and sending a lot of data about what you do back to Microsoft, but it does work well enough if your computing needs are in line with the average home user's (web browsing, video streaming, gaming... those modest sorts of things). Linux can do all those things too, but differently, using programs that are unfamiliar... oh, and GIMP sucks compared to Photoshop. Just about every time I need to edit an image in Linux, I get this urge to succumb to the Get Windows 10 nagware and let Microsoft go full Big Brother on my computing... then I come to my senses.
Michael Bay - Thursday, July 28, 2016 - link
GIMP is not the only, ahem, "windows ecosystem alternative" that is a total piece of crap on loonixes. Anything outside of the browser window sucks, which tends to happen when your code maintainers are all dotheads and/or 14 years old.
Arnulf - Thursday, July 28, 2016 - link
I finally relegated my E6400-based system from its role as my primary computer and bought a new one (6700K, 950 Pro SSD, 32 GB RAM) a couple of weeks ago.
While the new one is certainly faster at certain tasks, the biggest advantage for me is significantly lower power consumption (30W idle, 90W under load versus 90W idle and 160-180W under load for the old one) and consequently less noise and less heat generation (see the rough cost sketch after this comment).
Core2 has aged well for me, especially after I added a Samsung 830 to the system.
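To put those idle figures into rough yearly numbers, a minimal Python sketch follows; the 8 hours/day duty cycle and the 0.20 per kWh tariff are illustrative assumptions, not values from the comment.

```python
# Back-of-the-envelope estimate of the idle-power savings described above.
# ASSUMPTIONS: 8 hours/day at idle and 0.20 (local currency) per kWh;
# swap in your own usage pattern and tariff.
old_idle_w, new_idle_w = 90, 30          # watts, from the comment
hours_per_day, price_per_kwh = 8, 0.20   # assumed

delta_kwh_per_year = (old_idle_w - new_idle_w) / 1000 * hours_per_day * 365
print(f"~{delta_kwh_per_year:.0f} kWh/year saved at idle, "
      f"~{delta_kwh_per_year * price_per_kwh:.0f} per year at the assumed tariff")
# -> ~175 kWh/year, i.e. roughly 35 per year with these assumptions
```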
Demon-Xanth - Thursday, July 28, 2016 - link
I still run an i5-750; NVMe is pretty much the only reason I want to upgrade at all.
Namisecond - Thursday, July 28, 2016 - link
NVMe may not be all it's cracked up to be. For the most part, it limits you to booting Windows 8 and higher, and good luck with the free upgrade to Windows 10 (which supposedly ends tomorrow).
FourEyedGeek - Monday, August 8, 2016 - link
Same CPU here, mine is running at 4GHz; I can't see a reason other than NVMe to upgrade.
dotwayne - Thursday, July 28, 2016 - link
Had a trusty E6300 @ 3.4-3.5GHz back then. Ahhh... miss those days of OC-ing the shit out of those cheap but super capable silicons.
jamyryals - Thursday, July 28, 2016 - link
Neat article, I enjoyed it Ian!
azazel1024 - Thursday, July 28, 2016 - link
Yeah, a lot of those assumptions and guesstimates for the future seem either overly optimistic or seem to ignore realities. I realize board power doesn't equate to average power use, but you are still talking about max power consumption that would drain a current cell phone battery dead in less than an hour, even with some of the biggest phone batteries.
Beyond that is the heat dissipation: that phone is going to get mighty hot trying to dissipate 8+ watts out of even a large phone chassis.
As pointed out, 32 cores seems a wee bit excessive. A lot of it seems to be "if we take it to the logical extreme" as opposed to "what we think is likely".
Peichen - Thursday, July 28, 2016 - link
Take a 45nm C2Q Q9650 ($50 on eBay), overclock it to 4.0GHz, and you will be as fast as AMD's FX-9590 that's running at 220W. An older motherboard and DDR2 will be harder to come by, but it is sad how AMD never managed to catch up to Core 2 after all these years. The E6400 was my first Intel after switching to AMD after the original Pentium, and I have never looked back at AMD again.
Panoramix0903 - Thursday, July 28, 2016 - link
I made an upgrade from a C2D E6550 to a Q9650 in my old Dell Optiplex 755 MT. Plus 4x 2GB DDR2-800, an Intel 535 SSD 240 GB, a Sapphire Radeon HD7750 1GB GDDR5, a Sound Blaster X-Fi, and a USB 3.0 PCI-E card. Running Windows 7 Professional. Three times more power than the original Dell configuration :-)
JohnRO - Thursday, July 28, 2016 - link
I just logged in to tell you that I'm reading this article on my desktop PC, which has an Intel Core 2 Duo E4300 processor (1.8 GHz, 200 MHz FSB) with 4 GB of RAM (started with 2). When I wanted (or needed) to, I overclocked this processor to 3 GHz (333 MHz FSB).
My PC will have its 10 year anniversary this December. Over the years I upgraded the video card (for 1080p H.264 hardware decoding, and games when I still played them) and added more hard drives. The PC has enough performance for what I'm using it for right now - so I would say that this is a good processor.
siriq - Thursday, July 28, 2016 - link
I still got my mobile 2600+ Barton @ 2750MHz and a Socket 939 3800+ X2 @ 2950MHz. They were awesome!
althaz - Thursday, July 28, 2016 - link
I bought a C2D E6300 the week it came out, my first Intel CPU since 2000. My previous CPUs had been an AMD Athlon 64 and an AMD Athlon Thunderbird.
That E6300 remains my all-time favourite CPU. It's still running in a friend's PC (@ 2.77GHz, which I overclocked it to soon after getting it). It was just *so* fast compared to my old PC. Everything just instantly got faster (and I hadn't even upgraded my GPU!).
perone - Friday, July 29, 2016 - link
My E6300 is still running fine in a PC I donated to a friend. It was set to 3GHz within a few days of purchase and never moved from that speed.
Once or twice I changed the CPU fan as it was getting noisy.
Great CPU, and a great motherboard in the Asus P5B.
chrizx74 - Saturday, July 30, 2016 - link
These PCs are still perfectly fine if you install an SSD. I did it recently on an Acer Aspire T671 desktop. After modding the BIOS to enable AHCI, I put in an 850 EVO (runs at SATA 2 speed) and a pretty basic Nvidia GFX card. The system turned super fast and runs Windows 10 perfectly fine. You don't need faster processors; all you need is to get rid of the HDDs.
Anato - Saturday, July 30, 2016 - link
I'm still running an AMD Athlon X2 4850 at 2.5GHz as a file server + MythTV box. It supports ECC, is stable and has enough grunt to do its job, so why replace it? Yes, I could get a bit more energy efficiency, but in my climate heating is needed >50% of the time, and new hardware has its risks of compatibility issues etc.
+10 for AnandTech again, the article was great as always!
serendip - Sunday, July 31, 2016 - link
I'm posting this on a MacBook with an E6600 2.4 GHz part. It's still rockin' after six years of constantly being tossed into a backpack. The comparisons between C2D and the latest i5 CPUs don't show how good these old CPUs really are - they're slow for hard number crunching and video encoding, but they're plenty fast for typical workday tasks like web browsing and even running server VMs. With a fast SSD and lots of RAM, processor performance ends up being less important.
That's too bad for Intel and computer manufacturers, because people see no need to upgrade. A 50% performance boost may look like a lot on synthetic benchmarks but it's meaningless in the real world.
artifex - Monday, August 1, 2016 - link
"With a fast SSD and lots of RAM, processor performance ends up being less important."I remember back when I could take on Icecrown raids in WoW with my T7200-based Macbook.
And I actually just stopped using my T7500-based Macbook a few months ago. For a couple of years I thought about seeing if an SSD would perk it back up, but decided the memory bandwidth and size limitations, and the graphics, just weren't worth the effort. Funny that you're not impressed by i5s; I use a laptop with an i5-6200U now. (Some good deals on those right now, especially if you can put up with the integrated graphics instead of a discrete GPU.) But then, my Macbooks were about 3 years older than yours :)
abufrejoval - Sunday, July 31, 2016 - link
Replaced three Q6600s on P45 systems with socket-converted Xeon X5492s at $60 off eBay each. Got 3.4GHz quads now never using more than 60 watts under Prime95 (150 watts "official" TDP), with 7870/7950 Radeons or a GTX 780 running all modern games at 1080p at high or ultra. Doom with Vulkan is quite fun at ultra. Got my kids happy and bought myself a 980 Ti with the savings. If you can live with 8GB (DDR2) or 16GB (DDR3), it's really hard to justify an upgrade from this 10 year old stuff.
Mobile is a different story, of course.
seerak - Monday, August 1, 2016 - link
My old Q6600 is still working away at a friend's place.
The laugher is that he used to work for Intel, and 6 months after I gave it to him in lieu of some owed cash, he bought a 4790K through the employee program (which isn't nearly as good as you'd think) and built a new system with it.
The Q6600 works so well he's never gotten around to migrating to the new box - so the 4790k is still sitting unused! I'm thinking of buying it off him. I do 3D rendering and can use the extra render node.
jeffry - Monday, August 1, 2016 - link
That's a good point. Like, answering the question "are you willing to pay $800 for a new CPU to double the computer's speed?" Most consumers say no. It all comes down to the mass market price.
wumpus - Thursday, August 4, 2016 - link
Look up what Amazon (and anybody else buying a server) pays for the rest of the computer and tell me they won't pay $800 (per core) to double the computer's speed. It isn't a question of cost; Intel just can't do it (and nobody else can make a computer as fast as Intel, although IBM seems to be getting close, and AMD might get back in the "almost as good for cheap" game).
nhjay - Monday, August 1, 2016 - link
The Core 2 architecture has served me well. Just last year I replaced my home server, which was based on a Core 2 Duo E6600 on a 965 chipset motherboard. The only reason for the upgrade is that the CPU was having a difficult time handling transcoding jobs to several Plex clients at once.
The desktop PC my kids use is also Core 2 based, though slightly newer. It's a Core 2 Quad Q9400 based machine. It is the family "gaming" PC, if you dare call it that. With a GT 730 in it, it runs the older games my kids play very well, and Windows 10 hums along just fine.
Nameofuser44 - Wednesday, August 3, 2016 - link
Here I thought I was the only slowpoke not to give up my C2D (4300) & ATI 5770 / 2GB RAM as a daily driver. Well, here's to ten wonderful years!
rarson - Thursday, August 4, 2016 - link
I'm still using a Core 2 Duo E8600 in my desktop. In an Abit P-35 Pro motherboard. The damn thing just works too well to get rid of, and I love the Abit board.
rarson - Thursday, August 4, 2016 - link
Durr, it's the IP35 pro, P35 chipset.
skidaddy - Friday, August 5, 2016 - link
My 10 year old E6600 with an EVGA board & EVGA/NVIDIA 295 video card is also a great space heater. CUDA on the card extended the utility of the setup. The only limitation is that CPU-only video decoding limits streaming to 1440. Waiting for Intel Kaby Lake or a better on-die Intel GPU to be able to handle 4K @ 60fps over HDMI, not USB3(+).
BoberFett - Friday, August 5, 2016 - link
I'm still rocking my C2D E6500. It does the job.
johnpombrio - Friday, August 5, 2016 - link
The Core 2 architecture was developed in Israel by an Intel team working on mobile processors. Intel suddenly realized that they had a terrific chip on their hands and ran with it. The rest is history.
http://www.seattletimes.com/business/how-israel-sa...
FourEyedGeek - Monday, August 8, 2016 - link
How do you think one of those first Core processors would fare if fabricated on Intel's 10nm process? Could they lower voltage or increase performance significantly?
Visual - Monday, August 8, 2016 - link
So a 10 year old chip is about half the performance of today's price equivalent. I'd have hoped today's tech to be more like 10 times better instead of just 2.
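Taking the comment's 2x-observed versus 10x-hoped-for figures at face value, the gap is easy to express as compound annual rates; a minimal Python sketch:

```python
# Convert "2x over 10 years" (observed) and "10x over 10 years" (hoped for)
# into compound annual improvement rates. The factors are taken from the
# comment above at face value, not from any benchmark data.
years = 10
observed_factor, hoped_factor = 2, 10

observed_rate = observed_factor ** (1 / years) - 1
hoped_rate = hoped_factor ** (1 / years) - 1
print(f"2x over {years} years  ~= {observed_rate:.1%} per year")
print(f"10x over {years} years ~= {hoped_rate:.1%} per year")
# -> roughly 7% vs 26% per year
```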