50 Comments
XabanakFanatik - Thursday, May 11, 2017 - link
Interesting how the Introduction document header says 9700 series, but the list of models beneath are all 95xx models. Document typo?
Ian Cutress - Thursday, May 11, 2017 - link
Parts of the document that mention the 9700 all seem to be copy-paste of the 9500 sections.
LauRoman - Thursday, May 11, 2017 - link
I understand that it's an old architecture meant for some entities that somehow haven't moved off Itanium. Even if we forget the cost... 130W TDP for a quad core at 1.73 GHz?
SarahKerrigan - Thursday, May 11, 2017 - link
A lot of the TDP is the uncore needed to support 8-socket glueless SMP.
mczak - Thursday, May 11, 2017 - link
I'd guess the actual power consumption is probably much lower (because the TDP compared to the much higher clocked, twice the core count versions is really incredibly high).
Nevertheless, this is built on a 32nm process - you're not going to win any efficiency benchmarks using that... Intel is lucky to still have a manufacturing plant for these new chips ;-).
DanNeely - Thursday, May 11, 2017 - link
Old fabs live for several generations past when they stop being state of the art. For Intel it's mostly chipsets and (I'm assuming) network adapters.
mczak - Thursday, May 11, 2017 - link
Yeah I know, that was just a joke (by the looks of it, Intel operates fabs back to 65nm).
But just to put this in context, Ivy Bridge (built on a 22nm process) was released nearly exactly 5 years ago. The last "big core" x86 chips built on 32nm were Sandy Bridge, over 6 years ago.
StevoLincolnite - Friday, May 12, 2017 - link
There were Pentium 4 chips built at 65nm... And the Interposer for High Bandwidth Memory was built at 65nm.
mode_13h - Friday, May 12, 2017 - link
Yeah, I was hoping 65 nm might be enough for Prescott to stretch its legs and hit the kind of clock speeds for which it was architected. Sadly, it was not.
Kevin G - Friday, May 12, 2017 - link
Intel moved their chipsets to state-of-the-art processes around the time of Sandy/Ivy Bridge. This was to cut platform power for mobile systems. This is one reason they have been able to produce 5W system packages for the Y series. It does look like Intel will not be as aggressive with this in the future due to difficulties with the newer nodes. The cost/benefit isn't there to migrate chipsets to a new node as soon as it comes online, but they should sync up once volume ramps (i.e. chipsets should be on the same node for Intel's 'architecture' and 'optimize' phases).

The point about network adapters is true to an extent. Some of the high speed 100 Gbit equipment is kept on the newest process for both power consumption and performance reasons. While not state of the art in a raw transistor size sense, Intel does have a production silicon photonics line going right now. That line is reportedly being used for transceiver chips for both Intel and Cisco. However, there is plenty of 1 Gbit and some 10 Gbit equipment that Intel makes in their old fabs. Intel's wireless chips are spread across a mix of nodes to balance performance, analog logic, and power consumption factors. They also absorbed Texas Instruments' cable modem business a few years ago, but there hasn't been enough time for products in that portfolio to be migrated to Intel's own fabs. It is not expected that those chips will receive aggressive treatment on the manufacturing side.
The scalable memory buffers were for a time intentionally kept a generation behind, as they contain some of the highest speed circuits Intel has devised internally in a chip (6.4 GHz). Ironically, these are still used with the Itanium 9700. I suspect that the new memory buffers used in recent Xeon E7's have migrated to a new node, as the link design has radically changed (originally it required internal logic running at six times the base memory clock: 1067 MHz -> 6.4 GHz logic).
tipoo - Thursday, May 11, 2017 - link
So is Intel's Itanium 'flop' still single-handedly bigger than all of AMD's cashflow?
ImSpartacus - Thursday, May 11, 2017 - link
Oh god, I always get sad over these kinds of comparisons. Poor AMD...
Nagorak - Thursday, May 11, 2017 - link
Yeah, funny how abusing a monopoly position to freeze out the competition results in great profitability. No surprise that companies continually do it and then pay a few paltry "fines".
boozed - Thursday, May 11, 2017 - link
And the hype, oh the hype...
That graph on Wikipedia of progressive Itanic sales forecasts almost says it all.
Kevin G - Thursday, May 11, 2017 - link
The Itanium 9700 was supposed to offer some enhancements to the design to improve IPC ever so slightly. Originally it was to arrive on 22 nm as well, but Intel moved it back to 32 nm mid-stream. While they announced the process node change, they likely decided to dump the IPC improvements it was originally scheduled to have and just focused on refining the last design.

Kinda funny that Itanium is still on 32 nm despite the rest of Intel's lineup being on 14 nm and on track to ship 10 nm parts this year. Even without the IPC enhancements as promised, a die shrink would have permitted higher clocks and/or more cores in the design. This would have been less embarrassing than the rebrand the few holdovers are getting.
This mimics the death of Alpha after all. The EV7 was replaced by the EV7z, which only offered a similar clock speed bump after years of waiting. The EV8 had already taped out, but HP put a stop to its release to focus on Itanium. The irony of HP being the last Itanium customer is not lost here.
Kevin G - Thursday, May 11, 2017 - link
And one more thing... it would have been interesting to see Itanium's original goal of socket compatibility with the Xeon E7's. This too was once an announced feature, only to be killed off as interest in Itanium dwindled.

Intel did meet the goal of using a common chipset with Xeons for a while. This makes me wonder if the few minor changes in the Itanium 9700 design are there to support some change in the chipsets. I.e. the 9700 series chips can be used in existing 9500/9300 series sockets, but there are going to be a handful of boards that'll only work with the 9700 series due to chipset changes.
Reflex - Thursday, May 11, 2017 - link
Man I miss the Alpha. I still have a functioning 533MHz Alpha CPU/board at home. No use for it though.
melgross - Thursday, May 11, 2017 - link
It's a shame, that. Microsoft always seems to be at least partly responsible for killing off a number of promising chip designs. By dropping, or refusing, support, it's the death knell.
Ian Cutress - Thursday, May 11, 2017 - link
Support is hard. Time, people, resources. Easier to justify dropping as market share dwindles.
SarahKerrigan - Thursday, May 11, 2017 - link
Microsoft was only a small Itanium player by the time they exited. Itanium sales were dominated by HP-UX and NSK; Windows and Linux lagged far, far behind, along with VMS and (more distantly) mainframe operating systems.

And having worked with it for most of the last decade, I'm not sure how "promising" I'd call IPF. It introduced clever new solutions for problems that don't actually exist, while doing nothing but the same old "eh, just hoist loads" handwaving for the massive issues in-order processors have. At least IBM's in-order cores had run-ahead...
mode_13h - Thursday, May 11, 2017 - link
mode_13h - Thursday, May 11, 2017 - link
GPUs serve as an example of what it takes to do in-order well. You just need workloads with enough concurrency... that's all.
SarahKerrigan - Thursday, May 11, 2017 - link
GPUs aren't running anything close to workloads as generalized as IPF was.
For servers, latency matters.
mode_13h - Thursday, May 11, 2017 - link
I get that. I'm just pointing out what you have to do to your workload for in-order to make sense in throughput-optimized contexts.
mode_13h - Thursday, May 11, 2017 - link
BTW, thanks for dropping your knowledge on us. It's interesting to hear the perspectives of insiders.
Alexvrb - Thursday, May 11, 2017 - link
You should probably research the Itanic more before you go blaming Microsoft. They supported it as best as they could from 2001 to 2008. Sales were poor and the ecosystem was tiny. I'm surprised they supported it as long as they did.
SarahKerrigan - Thursday, May 11, 2017 - link
Yep. Microsoft's IPF involvement was almost entirely driven by large MSSQL installations that wanted the extra RAM and bandwidth IME.
aryonoco - Thursday, May 11, 2017 - link
There were far more Linux installations on IA-64 than there ever were Windows installations, but even RedHat killed RHEL for Itanium long ago.

Itanium's job was to kill off big iron Unix, and it mostly succeeded. Alpha, SGI, even SPARC are all dead or in the process of dying. Only IBM's POWER seems to have survived.
It's a funny exercise to ponder what would have happened if AMD had not created x86-64. Would Intel have finally made x86 64-bit? Or would we all be running some form of Itanium now? And would the compilers have finally figured out how to optimise for it?
SarahKerrigan - Thursday, May 11, 2017 - link
Couple things.
-SGI was looking for the exits long before IPF shipped. Look at SPEC numbers for MIPS parts starting at, like, 1998; it wasn't pretty. SPARC is currently outselling Itanium by a significant margin and both Oracle and Fujitsu have future generations roadmapped.
-As far as I know, Intel had 64-bit programs at two different points - an internal 64-bit x86 program around 2000, and a 64-bit RISC design in the early 1990s (IAX) which was killed when Intel bought into HP's advanced processor effort (which became IPF).
-"Compilers optimizing for it" is an easy trope to trot out for IPF's failures, but it had fundamental issues that weren't just compiler trouble. In-order processors have a godawful time handling memory latency, especially when your access patterns aren't predictable. IBM approached this problem with runahead; the HP/Intel solution was just "schedule your loads earlier, dammit (and here's Advanced Loads and Speculative Loads to help out - at least on trivial code streams)" and it didn't work well at all. There are good reasons in-order microarchitecture is dead in high-end processors.
mode_13h - Thursday, May 11, 2017 - link
I agree that the writing was already on the wall for most of the big iron. BTW, along with SPARC and POWER, PA-RISC seems to be one of the hold-outs. SPARC and now POWER are interesting cases, because they're open standards. I read some compelling speculation that Intel's legal department did as much to sink the Itanic as anything else did, by creating so many legal hurdles for would-be clones that anyone using it would be submitting to single-vendor lock-in.

As for in-order, it's not even found in the performance-optimized ARM cores or even Atom (since Silvermont). The only times it makes sense is in power-optimized designs and when you have boatloads of concurrency (i.e. GPUs). In fact, HD Graphics is probably the only current in-order Intel architecture.
SarahKerrigan - Thursday, May 11, 2017 - link
PA-RISC is gone, folded into Itanium (which was, after all, originally designed as the long-term evolution of PA, starting about thirty years ago). PA-8900 (2004ish? it's been a while) was the end of the line.
SarahKerrigan - Thursday, May 11, 2017 - link
Just to clarify - I'm fully aware IPF wasn't shipping thirty years ago, but R&D started around then (as PA-WideWord/Super Workstation). It was a long, hard road from there to Merced shipping (and landing with a resounding thud.)
mode_13h - Friday, May 12, 2017 - link
Huh. I was sure I'd read about another generation of PA-RISC CPUs, just a few years ago.

Well, they were good in their day. I always remember them posting up some of the top SPEC numbers.
mode_13h - Friday, May 12, 2017 - link
For me, the interesting "what if" isn't if Itanium had better support, but if Intel hadn't killed Alpha and (indirectly) PA-RISC. x86-64 would still probably end up ruling the cloud, but maybe we'd have Alpha-powered cell phones?
SarahKerrigan - Friday, May 12, 2017 - link
Alpha had issues of its own. The Alpha 21364 was way ahead of everyone else in uncore (well, except K8, which got an IMC and a point-to-point SMP interconnect around the same time) but the ISA had its share of nastiness, especially prior to BWX.

PA-RISC didn't *die*, exactly. IPF was always intended by HP to be PA's evolution, and shares a lot of design concepts (relatively compact cores in a sea of cache, no integer multiply...). IPF was a combination of PA design concepts with a massive bet on in-order processors with some degree of static scheduling being the future.
As it turned out, it wasn't.
Meteor2 - Friday, May 12, 2017 - link
I find ISA family trees fascinating. What we're left with is the product of natural selection; the weak have died and the strong thrived. I'm always surprised SPARC is still with us; I doubt that any organisation that feels it needs to be using SPARC is taking the optimal approach to meet its requirements. At least POWER is a cracking platform for VMs.
mode_13h - Friday, May 12, 2017 - link
Well, SPARC is open. I don't know how much that has to do with it.

It's a shame to see POWER fall behind x86-64. I wonder if it'll ever regain the performance crown.
SarahKerrigan - Friday, May 12, 2017 - link
SPARC is "open" only in a very limited sense. The last open-source core was the T2, which is a decade old (and fairly mediocre at the time.)

Power9 should do just fine against SKL-SP. Power8 beat Intel regularly early in its lifecycle. I wouldn't say it's fallen massively behind - on SPEC, 12-core P8 behaves similarly to 24-core Broadwell-EX iirc (both in the 900-950 per socket range on int_rate)
mode_13h - Saturday, May 13, 2017 - link
I wasn't talking about open source. Just that it's an open standard. See http://sparc.org

As for POWER8, I had these in mind:
http://www.anandtech.com/show/9567/the-power-8-rev...
http://www.anandtech.com/show/10539/assessing-ibms...
...where it seemed pretty clear that Power8 was less efficient than comparable Intel offerings.
SarahKerrigan - Saturday, May 13, 2017 - link
The second review shows P8 being more efficient, at least at MySQL, than Haswell-EP. For a 22nm CPU toward the end of its life (remember, Power8 has been around since 2014!), that isn't a bad place to be. My company runs some P8 and we're pretty happy with it.

On raw performance, that isn't anywhere close to the high end of Power8, which is the "Venice" SCM family used in the big E870/E880 systems - these chips top out at 12-core, 4GHz. What OpenPower gets is "Turismo", which is lower-binned "Venice" with lower clocks and half their memory controllers disabled.
willis936 - Saturday, May 13, 2017 - link
Or low speed embedded systems that have sub 20 cycle memory. Don't forget that there are still more microwaves than servers.
aryonoco - Friday, May 12, 2017 - link
Thank you for your corrections and insightful comment.
mode_13h - Friday, May 12, 2017 - link
Yeah, definitely thanks @SarahKerrigan.
vladx - Friday, May 12, 2017 - link
Is that Sarah Kerrigan from Microsoft?
SarahKerrigan - Friday, May 12, 2017 - link
Not that I know of.
Bullwinkle J Moose - Friday, May 12, 2017 - link
Quote: "systems based on Itanium are advertised as high-uptime mission critical servers"
So it would make sense that the first 10nm chips will be perfected for server platforms requiring high reliability and uptime before moving to the consumer market
mission critical systems for high altitude satellite/aircraft or other high radiation environments requiring protection from cosmic rays and such would need to be process-tweaked before moving to high volume consumer systems
certain parts of the chip could be 10/14/22/32nm or whatever works best under adverse environments while maintaining reliability
But then banning consumer laptops on International flights would also solve that problem
mode_13h - Saturday, May 13, 2017 - link
Is this a troll? No, enterprise CPUs tend to lag, in terms of manufacturing process. Perhaps that's just down to the economics of manufacturing larger dies, however.

Chips for radiation-intense environments lag even further behind server CPUs.
Sivar - Friday, May 12, 2017 - link
"The main reason for Itanium was to run HP-UX and compete against big names, such as Oracle, using a new IA-64 instruction set."

The Itanium was never designed to compete with Oracle. What would they compete against? Their database products? Neither Intel nor HP have significant database products. Hardware? Oracle only started in the hardware business after they assimilated Sun, by which time Itanium was barely on life support.
Itanium was designed to compete with Alpha, IBM's POWER, Sun SPARC, and to eventually replace x86.
mode_13h - Saturday, May 13, 2017 - link
It's pretty obvious the author mentally conflated Sun and Oracle.
speculatrix - Sunday, May 14, 2017 - link
I'm very sad at this. One platform we have to support is HP-UX on Itanium, and I was hoping we could let it die, forgotten and not missed.
KAlmquist - Tuesday, May 16, 2017 - link
“One of Intel’s ventures into the historic mainframe space was Itanium.”

If it had been more successful, you might not be describing it this way. People were speculating it would eventually replace x86, because it could run x86 code, permitting a gradual transition. In practice, the price/performance of Itanium running 32-bit x86 code meant that no one would buy Itanium for that purpose. So if you needed to transition gradually to a 64-bit instruction set (rather than dumping all your old binaries at once), x86-64 was the way to go.