27 Comments

  • jordanclock - Thursday, March 8, 2018 - link

    His name was Intel Poulson.
  • Amandtec - Friday, March 9, 2018 - link

    Ha. You beat me to it.
  • Elstar - Thursday, March 8, 2018 - link

    The 90s were quite something. Superscalar and out-of-order CPUs were starting to go mainstream, and people could reasonably disagree about whether VLIW (including IA64) was where the future was headed. I even remember trade-magazine rumors about Apple experimenting with VLIW "TriMedia" coprocessors to counter Intel.

    The failure of IA64 is one for the history books. They really mis-predicted where the future was headed in so many ways, both technically and competitively.
  • Elstar - Friday, March 9, 2018 - link

    Oh, and let's not forget that Intel's fundamental miscalculation wasn't even IA64 specific. In that era, they bet/hoped that programmers and/or compilers could scale up to very wide designs (IA64) or very deep pipelines (the P4). Both approaches demoed well but were terribly impractical for most real code.

    Intel had to restart their CPU design from the older P3 to create the Pentium M, and ultimately what we see/know today (plus more SIMD and multicore).
  • name99 - Friday, March 9, 2018 - link

    Uhh, not quite.
    Trimedia was a PHILIPS-designed VLIW.
    https://en.wikipedia.org/wiki/TriMedia_(mediaproce...

    Apple looked at it in a very vague way (in that you'd expect any large company to look at every new type of hardware) but never with the goal of it being a CPU, rather with it possibly being a media acceleration card (essentially a fancier replacement for the [VERY rarely used] Apple MPEG card
    https://manuals.info.apple.com/MANUALS/1000/MA1446... )

    Apple likewise looked at Cell (more generally this time, with media in mind but also as the base CPU in a machine). And in both cases it was concluded that the additional hassle you had to go through for these non-standard designs was not worth the payoff.

    (Nowadays, of course, the calculations are very different because an additional design is simply a few sq mm on a SoC, and everyone gets it; we're not talking a $1000 card that 10,000 people buy. So everyone has GPUs, everyone with a QC phone has a VLIW DSP, and soon enough everyone will have some flavor of NPU.

    But the Apple calculation for the past 10 years has been that the space where VLIW can do well --- very structured algorithms and data --- is, in the real world, better handled by dedicated silicon. So rather than a VLIW there is a baseband DSP, an H.264/5 ASIC, an ISP.

    Maybe that will change, but so far the timing never really worked out for VLIW. IF something like TriMedia could have been shipped with every Mac at a reasonable price, it might have made a great foundation for QuickTime in the 90s, back when QT was all about dozens of different codecs.)

    In the case of IA64 there was so much fail in so many ways that it's hard to choose just one issue.
    Clearly the starting point was kinda broken --- general purpose CPUs need to handle irregular data, which means the pain of a cache system, out-of-order loads/stores, and then out-of-order instructions, which means you might as well go full OoO, and then adding superscalar is not hard. So the VLIW buys you fsckall.
    But on top of that Intel insisted on adding every piece of craziness they could invent, to ensure that no-one else could copy it. They were so obsessed with this particular goal that they forgot to ask whether anyone would WANT to copy it...
  • mode_13h - Saturday, March 10, 2018 - link

    Imagine GPUs didn't happen. Then, people would be praising Intel's foresight for going "VLIW" (hint: it's not really VLIW). Assuming, of course, they continued to refine it to the same extent as they have x86-64.

    VLIW *did* win the day - in DSPs and GPUs (for a while, at least). Just not the desktop.
  • mode_13h - Saturday, March 10, 2018 - link

    BTW, my point was they weren't wrong in predicting what would be the most demanding compute loads - they were just wrong in predicting how those would be handled. As a CPU company, they saw AI and graphics as challenges to be solved by the CPU, and picked an architecture with notable strength in those areas.
  • name99 - Saturday, March 10, 2018 - link

    I think you are being too kind. The nature and future of GPUs was already visible at the time Itanium was released, and the target market (servers, NOT desktop --- and Intel had no obvious plans to change that, and did nothing to prepare the desktop market for Itanium) was not where media or graphics mattered. (AI, forget it --- irrelevant in this time frame.)

    Itanium was supposed to be Intel's SERVER architecture, to compete against Alpha, SPARC, POWER, MIPS, etc. And it was ATROCIOUSLY designed for that particular job.
  • mode_13h - Sunday, March 11, 2018 - link

    The architecture of IA64 was probably locked-in before Nvidia even shipped the NV1. The term GPU wouldn't be coined for another 4-5 years, at least. Graphics chips in the mid-90's were very simple and hard-wired affairs.

    Intel's success with chips like the i860 probably informed some of their ideas about where IA64 might find heavy compute loads. Remember, SGI was booming back then, and there was a substantial pack of also-rans trying to make a go of the graphics minicomputer/workstation market.

    And Intel was building neural network chips as far back as 1993. Analog, but still...

    There's no doubt Intel had ambitions for IA64 to rule the desktop, as well. This is why they included an x86 emulator and pushed MS to port Windows to it.
  • name99 - Saturday, March 10, 2018 - link

    But Intel wasn't selling Itanium as a GPU or DSP, they were selling it as a CPU!
    No-one denied (back then or today) that VLIW has advantages in areas where the code and data structures are very regular -- I said that above.
    The problem is that CPUs are designed to solve problems where code and data are irregular.

    This is not an argument about whether diesel or gasoline engines would win the car market; it's an argument about whether motorbikes or trucks would win the container hauling market. One of those arguments could go either way; one of them has an obviously STUPID answer.
  • mode_13h - Sunday, March 11, 2018 - link

    point = missed.

    I'm saying Intel was looking to solve all computing problems inside their CPUs. And if we were still reliant on CPUs for these things, they'd have had a point. But it turned out to be much easier for sound cards to include DSPs and graphics cards to include ever more sophisticated & programmable ASICs.
  • mode_13h - Saturday, March 10, 2018 - link

    Lol. That must've been some tech journalist not knowing what they were talking about. TriMedia was only an embedded platform. It made no attempt at an upward-compatible ISA, nor do I think it had an MMU (not that Apple even used the one they had, back then).

    If they were looking at it for any reason, then it would've been as an accelerator chip for audio & potentially graphics processing. They *did* have a lot of floating point horsepower, but I'm not sure they were ever that much faster (if at all) than the Intel or PPC CPUs of their day. They'd have excelled only on price/perf or perf/W.

    I remember even reading about concerns over VLIW running out of steam, due to the networking problems involved in moving data between ever growing register files and large numbers of execution units. Enter ideas like Transport-Triggered Architectures.

    Say, did any ideas from asynchronous computing architectures ever go mainstream? Or, are modern CPU cores still fully synchronous?
  • name99 - Saturday, March 10, 2018 - link

    Uhh, when Apple was looking at Trimedia we were already using PPC.
    I'll be the first to admit that VM under old-school macOS wasn't going to win any awards for elegance or performance, but it was there and did work.

    The real issue I wanted to clarify was that Trimedia was looked at as a media co-processor, not as a CPU replacement.
    And yeah, a SUBSTANTIAL concern (which proved correct) was that mainstream CPUs at that time were improving so fast that a dedicated SW stack for a media co-processor made little sense --- the speed advantage today would be gone next year, and the co-processor would be slower than the CPU the year after that. No-one (correctly) expected these weird non-mainstream designs to upgrade any faster than every four or five years.
  • mode_13h - Sunday, March 11, 2018 - link

    Just because they were using PPC, you brazenly assume they were using VM? Nooo...

    Even through MacOS 8, VM was disabled by default. Memory corruption and hard-crashes were a daily affair, for serious Mac users. The company I worked for built hardware/software mainly for Mac users, at the time. It was a pretty dismal affair.

    TriMedia was good for its price and power envelope. It was a nice little embedded chip that found a lot of uses. The worst thing was that it failed to stay on the same performance curve as the big CPUs, so it faded into obscurity.
  • Elstar - Sunday, March 11, 2018 - link

    If you read my original post about TriMedia, I used the word "coprocessor". I don't think anybody in this forum is seriously arguing that TriMedia was ever a realistic CPU replacement. (I also never said that TriMedia was an Apple design, but I digress.)

    It is sometimes hard to predict which "weird non-mainstream designs" might turn out to have huge hidden demand once they are reasonably available. FPUs were once weird/non-mainstream chips. So were MMUs. So were GPUs. Now TPUs / neural-nets are the trendy dedicated-chip specialization that is getting integrated into mainstream chips.

    Also, unlike the TriMedia days, weird and constantly changing instruction set architectures aren't a dealbreaker anymore. GPUs have proven that translation of partially compiled code to machine code can work quite well.
  • mode_13h - Sunday, March 11, 2018 - link

    Even coprocessor implies some level of application programmability. Of course it's not an Apple design.

    I think FPUs were never weird, per se. Intel had one for the 8086 - the 8087 - dating back to 1980. The use of floating point (and its implementation in hardware) goes back long before that.

    GPUs provide a useful admonition that it's not the destiny of all discrete processing elements to get integrated into the CPU.

    The same will necessarily hold for neural network accelerators. Sure, you might get token support, the same way you get a token iGPU, but they will never be comparable to the capabilities of the discrete chips.

    > GPUs have proven that translation of partially compiled code to machine code can work quite well.

    This is their secret weapon. It has yet to become the de facto approach for general-purpose code, although MS has probably done more than anyone to make that a reality. On a related note, see WebAssembly.
  • Elstar - Monday, March 12, 2018 - link

    Heh. Floating point hardware is decades older than the 8087. Before the 8087 and IEEE 754, FPUs all behaved in subtly different ways that were a pain to debug. If that isn't the definition of "weird", then I don't know what is.

    I wouldn't dismiss iGPUs as token. For 99.99% of people, SoCs are good enough. The fact that discrete GPUs still exist at all is proof that people are *finding* problems to solve with discrete GPUs.

    Also, if you care about performance, then WebAssembly is awful. It is basically JavaScript byte code, and JavaScript was not designed for performance. Yes, web browsers have done an amazing job making JavaScript run fast, but making a terrible design run fast will never be as good as designing something to be fast from the start.
  • mode_13h - Tuesday, March 13, 2018 - link

    I never said there was nothing weird about FP hardware, just reacting to the idea of having FP in hardware. That's what you seemed to be saying was weird.

    As for iGPUs, the point was that they don't eliminate the need for dGPUs the way integrated FPUs killed discrete FPUs. It's not a trend, as you seemed to be implying.

    I don't know if you've had your head in the sand, but the market for dGPUs has been white hot. Doesn't matter whether it's because of gaming, VR, crypto, or what. The point is the same - if iGPUs really were a complete replacement, then this wouldn't be so.

    > WebAssembly

    You clearly don't know what you're talking about. The initial version has some annoying limitations (e.g. 64k pages), but there's no good reason why C++ (for example) compiled to WebAssembly can't approach the performance of natively-compiled C++.

    Compiling to WebAssembly avoids most of the performance limitations encountered when compiling to JavaScript. It was designed to compete with natively compiled code, and is lower-level (like true asm) than the JVM's stack-based machine model.
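
    For anyone curious what that path looks like in practice, here's a minimal sketch (assuming the Emscripten toolchain, em++/emcc, is installed; the file name dot.cpp is purely illustrative). The same compute-bound C++ kernel builds either natively or to WebAssembly:

        // Build natively:       g++ -O3 dot.cpp -o dot && ./dot
        // Build to WebAssembly: em++ -O3 dot.cpp -o dot.html
        //   (open dot.html in a browser, or run the generated dot.js under Node.js)
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // Simple compute-bound kernel: dot product of two vectors.
        double dot(const std::vector<double>& a, const std::vector<double>& b) {
            double sum = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i)
                sum += a[i] * b[i];
            return sum;
        }

        int main() {
            const std::size_t n = 1 << 18;            // ~262k elements, modest memory use
            std::vector<double> a(n, 1.5), b(n, 2.0);
            std::printf("dot = %f\n", dot(a, b));     // identical source, native or wasm
            return 0;
        }

    The interesting part is what happens next: the .wasm output is the "partially compiled" form, and the browser's engine finishes translating it to machine code at load time, much like a GPU driver does with shader bytecode.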
  • Johan Steyn - Friday, March 16, 2018 - link

    I believe the main reason Itanium failed was AMD64. AMD brought out very powerful Opterons, which caught Intel off guard. Intel was then forced to accept AMD64 and rebrand it. MS made it clear that they would not support another version of x86-64.

    It is strange that "puny" companies can change the course of history. Yes, there were other factors as well, but AMD64 was for sure the catalyst. Intel began development on x86-64 Xeons, which soon surpassed the Itaniums in performance.
  • boozed - Thursday, March 8, 2018 - link

    And the Itanic finally slips below the waves
  • Threska - Saturday, March 10, 2018 - link

    Could have saved Intel from MELTDOWN.
  • SarahKerrigan - Thursday, March 8, 2018 - link

    "got 4-way Hyper-Threading"

    No it didn't. What it got was "dual-domain multithreading," which isn't the same thing, but was rather part of Poulson's shift to a somewhat decoupled frontend and backend. In terms of visible threads, it remained two-per-core SoEMT, just like every other IPF core since Montecito.
  • Dragonstongue - Thursday, March 8, 2018 - link

    everything has a time for it to "end"
    Seems they (Intel) push as few boundaries as possible to stay relevant, doing the minimum needed to ensure continued sales. It makes me wonder what the high-tech industry will look like around 2021-2025, and even more what computing power we would now be enjoying had Intel NOT shafted AMD the way they did for as many years as they did (and still do to a point), rather than the small half-steps forward we have been seeing for the past decade or so (chips are wicked fast versus what we used to have, granted, but they also seem to push only very small steps ahead per annum, realistically speaking).

    They ALL should follow the mobile design approach: use the least amount of power possible and get the absolute best performance that can be achieved at that power. They can always grow the die or ramp clock speeds after fine-tuning for the lowest possible power use (especially the average and sustained clock/power that many seem to forget about).

    VLIW was used at the consumer level, at least from what I know, mostly in Radeons: up to the 6900 series with VLIW4, and VLIW5 before that (I believe GCN is still very akin to the principle, though it isn't usually described as such). They had a crap ton of performance available IF the software took advantage of it, but, like anything, it seems devs will do the least amount of work possible to get things going, or we would not need GPUs/CPUs with access to many GB of memory and thousands of shaders in the first place ^.^
  • Flunk - Thursday, March 8, 2018 - link

    They could discontinue these tomorrow and very few people would care. It's those enterprise availability contracts keeping it going even this long.
  • Sivar - Friday, March 9, 2018 - link

    Itanium is still the only game in town for running VMS, at least until the VMS port to AMD64 is complete.
  • Kevin G - Saturday, March 10, 2018 - link

    Well, there is Alpha if you want to use even more dated hardware.
  • Johan Steyn - Friday, March 16, 2018 - link

    Amazing to still hear about these Itanics. This was probably the most costly fail in the history of tech.
