22 Comments


  • IntelUser2000 - Friday, March 4, 2005 - link

    "why would you make a feature on a cpu that can only be enabled later anyways?"

    It may not make sense, but apparently the Pentium 4 had Hyper-Threading disabled even in Willamette. There is talk of unused "dark transistors" in Prescott. Why would they do that? Maybe because it's not feasible, easy, or cheap enough yet, and they want to enable it later. If HT had been enabled on Willamette, the performance hit would have been significant, unlike today, when it's negligible. But it's easier to enable when it's already there, don't you think?

    "I thought we were talking about ways of increasing bandwidth to the cpu -- eg. intel does it by increasing the ram standard (ddr333 to ddr400), amd has now chosen to have the on die memory controller so as faster HTT will increase bandwidth between cpu and everything else"

    Specifically, bandwidth between processors in dual- and multi-processor systems, and/or I/O. We are talking about desktops here, so it's only I/O. The CPU only needs to talk to the memory controller for memory bandwidth, and since that controller is integrated, memory bandwidth doesn't depend on an HTT increase (rough numbers at the end of this comment).

    You are kinda saying that if bus speeds increase on Pentium 4s, L2 cache bandwidth increases. That doesn't make sense at all. http://www.amd.com/us-en/Processors/ProductInforma...

    It even says at AMD's site, for HTT: "A system bus that uses HyperTransport technology for high-speed I/O COMMUNICATION."
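
    To put some rough numbers on that (my own back-of-the-envelope sketch, not from the article or AMD's page; it assumes a 16-bit HT link and a single 64-bit DDR channel):

        // Hypothetical calc: HyperTransport link bandwidth vs. one DDR channel.
        // The point: on the A64, memory traffic goes to the on-die controller,
        // so raising the HTT clock only speeds up the I/O link.
        #include <cstdio>

        int main() {
            // 16-bit HT link, double data rate, per direction
            double ht800  = 16.0 / 8.0 * 2.0 * 800e6;   // HTT at 800 MHz
            double ht1000 = 16.0 / 8.0 * 2.0 * 1000e6;  // HTT at 1000 MHz
            // one 64-bit DDR400 channel at 400 MT/s
            double ddr400 = 64.0 / 8.0 * 400e6;

            std::printf("HT 800 MHz:  %.1f GB/s per direction\n", ht800 / 1e9);   // 3.2
            std::printf("HT 1000 MHz: %.1f GB/s per direction\n", ht1000 / 1e9);  // 4.0
            std::printf("DDR400:      %.1f GB/s\n", ddr400 / 1e9);                // 3.2
            return 0;
        }

    None of that DDR bandwidth ever crosses the HT link on an A64 desktop; only I/O traffic does.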
  • Houdani - Friday, March 4, 2005 - link

    /snicker People talking to themselves is always good for a chuckle.
  • ncage - Friday, March 4, 2005 - link

    #20. I do agree with you that this would be a nightmare to code for, unless they make the compiler so good that it does the majority of the work for you, and I can't imagine the compiler being THAT good. That would mean lots and lots of multithreaded programming, which gets VERY complex. There are usually a few places where you can spawn a new thread and process stuff in the background, but for most applications more than a few threads are not needed, and deciding which areas of your application could be sped up with more threads becomes VERY complex. Take a for loop. Maybe every iteration of your for loop could be handled by a separate thread, but what happens if the results need to be handled in order? What if you have the 3rd result before you have the 1st? (There's a rough sketch of this at the end of my comment.) This is a relatively simple example, of course; multithreaded programming becomes quite complex. Heavy multiprocessing becomes very useful in complex scientific applications, though. I also think it would be quite useful in games.

    On a side note, I want to know what you have programmed like this. If you have programmed something like that, I will be quite impressed.
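
    To illustrate the ordering problem (a minimal, hypothetical C++ sketch of my own; the names and the per-iteration "work" are made up):

        // Each loop iteration runs on its own thread and may finish in any
        // order, so the consumer collects results by index and only reads
        // them back in order once every worker has joined.
        #include <cstdio>
        #include <map>
        #include <mutex>
        #include <thread>
        #include <vector>

        int main() {
            const int N = 8;
            std::map<int, int> results;  // results keyed by iteration index
            std::mutex m;

            std::vector<std::thread> workers;
            for (int i = 0; i < N; ++i) {
                workers.emplace_back([i, &results, &m] {
                    int r = i * i;  // stand-in for real per-iteration work
                    std::lock_guard<std::mutex> lock(m);
                    results[i] = r;  // the 3rd result may land before the 1st
                });
            }
            for (auto& t : workers) t.join();

            // Only now is it safe to walk the results in order.
            for (int i = 0; i < N; ++i)
                std::printf("iteration %d -> %d\n", i, results[i]);
            return 0;
        }

    Even this toy case needs a lock, a join, and an index just to get in-order results back; real loops with dependencies between iterations get much hairier.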
  • fitten - Friday, March 4, 2005 - link

    Well... we have yet to see whether Cell will make it out of PS3s and IBM servers, though. Cell will be too complicated for regular programmers to program (I've programmed similar systems in the past), and Sony's paper launch claims that they've solved problems no one has been able to solve yet... so... forgive me if I don't hold my breath waiting for Cell.
  • Warder45 - Friday, March 4, 2005 - link

    Yeah, but now they have competition from Cell to get them moving on SPH (special purpose hardware).
  • mrmorris - Friday, March 4, 2005 - link

    "Intel has spoken a bit about including special purpose hardware in their forthcoming processors..."

    Yeah well, that's what they said back in the MMX days, some 3600MHz ago!!
  • xsilver - Friday, March 4, 2005 - link

    "Dude, HTT is not memory standard, that's the link for the I/O, or in case of servers, communication between CPUs, get your facts straight. "

    I thought we were talking about ways of increasing bandwidth to the CPU -- e.g. Intel does it by increasing the RAM standard (DDR333 to DDR400), while AMD has now chosen the on-die memory controller, so a faster HTT will increase bandwidth between the CPU and everything else.

    "Not exactly free since you need to buy the CPU, you can't just enable on current CPUs can you? :). "

    Intel has the same thing, except you change the mobo instead of the CPU. How many people change the mobo without changing the CPU? Answer: nobody... Why would you make a feature on a CPU that can only be enabled later anyway? It's like your dad handing you the keys to a Ferrari but telling you, "you can only drive it when you're 18, sonny boy" :) ... why not just buy you the Ferrari when you're 18? ... oh wait -- didn't Intel just do that with their 64-bit instructions on Prescott?

    From a performance perspective, I still can't see a good argument for why Intel is leaving out the on-die controller... it's all the economics of making more money from chipset sales.

  • sphinx - Friday, March 4, 2005 - link

    I agree, #11.

    I think it is time to dump x86 altogether. Let's face it, Intel and AMD are still using the x86 architecture as a base for their new processors. I want to know if the CELL processor will change computing as we know it.
  • ceefka - Friday, March 4, 2005 - link

    Dedicated logic is nice when you can update it by flash (like a BIOS). That can already be done using FPGAs and CPLDs (like Xilinx's). If too much becomes dedicated in a fixed way, without being upgradable at low cost, the PC loses its versatility and attractiveness altogether.

    Can anyone remember what a PC was like ten years back, in 1995? Who would have predicted then that we would have 64-bit capable CPUs on the brink of going dual core, 4GB-capable mainboards, 300GB HDDs, LCD screens, and actually affordable RAM?

    With Intel adopting all these new memory technologies so fast, it's only logical that they are hesitant to produce a CPU with an integrated memory controller.

    256MB of on-die RAM? That will be one expensive MF!
  • IntelUser2000 - Friday, March 4, 2005 - link

    "intel's reasoning doesn't make sense. they seem make people change mobos, not because of differing ram standards, but because they change cpu socket so damn often."

    Well, it makes sense on the server side, specifically for Xeon MP and Itanium, and according to some news that's what they are going to do, since FB-DIMM will allow changing memory standards without changing the chipset or the chip.

    " the memory controller on the AMD64 has already been updated from HTT800mhz to HTT1000mhz.... and can be continually revised and just introduced on newer steppings of the same cpu's.... eg. amd's forthcoming "e" spec with sse3, 4x ddr3200 support and other stuff for free"

    Dude, HTT is not a memory standard; it's the link for I/O, or in the case of servers, communication between CPUs. Get your facts straight.

    We don't know the maximum speed grade the A64's memory controller will support. But the thing is, if you want a better memory standard than the controller is capable of, you need a newer version, in this case a newer CPU. Of course, this does not apply to the S423-to-S478 and S478-to-S775 changes.

    "amd's forthcoming "e" spec with sse3, 4x ddr3200 support and other stuff for free"

    Not exactly free, since you need to buy the new CPU; you can't just enable it on current CPUs, can you? :)

    SSE3 is not related to the integrated memory controller, and 4x DDR3200 support was already there.
  • Doormat - Friday, March 4, 2005 - link

    I'm thinking it's due to the fact that they make their own chipsets. Intel sells chipsets for $40 or so (MCH + ICH), and their uptake of new RAM technologies is quick (well, faster than AMD's, especially with DDR to DDR2). Plus there's the engineering cost. It doesn't add up. Unless and until they design a new chip from the ground up, an on-die memory controller is a lot of work for not a lot of money. Unless they manage to fall far behind AMD in terms of performance, I don't think it'll show up.
  • sprockkets - Friday, March 4, 2005 - link

    Gee, let's do everything possible to improve the situation except, oh shit, ditch x86 code that was created around 30 years ago.
  • elecrzy - Friday, March 4, 2005 - link

    #6, Intel's reasoning doesn't make sense. They seem to make people change mobos not because of differing RAM standards, but because they change the CPU socket so damn often.
  • mkruer - Friday, March 4, 2005 - link

    #7

    Sure, as if Intel's CPUs are not expensive enough, now you want to add another $15 "integrated on-die memory controller" tax.
  • bersl2 - Friday, March 4, 2005 - link

    You know, I saw enough flashy graphics in three days to make my head spin, and there were enough pictures of the future to make me think this was a World's Fair. Though what can one expect out of an event like this?
  • xsilver - Thursday, March 3, 2005 - link

    #6, what you said doesn't make sense from a performance perspective... how long does it take for new RAM standards to come out? There has been SDRAM (PC66, 100, 133), DDR RAM (PC2100, 2700, 3200), and now DDR2 (533) -- oh, and Rambus 600, 800, 1000.
    That's 10 RAM standards for PCs going as far back as the Pentium 200... the memory controller on the AMD64 has already been updated from HTT 800MHz to HTT 1000MHz... and can be continually revised and introduced on newer steppings of the same CPUs... e.g. AMD's forthcoming "E" stepping with SSE3, 4x DDR3200 support, and other stuff for free

    And #5 -- LOL -- so true -- AMD mobos are so cheap it's not funny (not including the nForce4, but that's another issue).
    Maybe Intel could just charge the extra $15 on their CPUs :P
  • IntelUser2000 - Thursday, March 3, 2005 - link

    Well, Intel said they are not supporting an integrated memory controller because you would have to change the board, the memory, and the CPU every time a new RAM standard comes out. Looking at desktops, it makes sense not to have the memory controller on the CPU, but for servers they have a solution. Maybe it's because a separate memory controller is more flexible? I mean, you are pretty limited when the memory controller is integrated, in terms of clock speed scaling, increased complexity, and memory standards. It makes sense for servers, though, and Intel recently announced that the Xeon MPs and Itaniums would have common sockets (the same socket) and an integrated memory controller.


    Anyway, this was interesting:
    "The answer appears to be somewhere in between Pentium M and Prescott, realistically being much closer to Willamette's 20 stage integer pipeline than Prescott's 31 stage pipe, for strictly power reasons."

    See, like I predicted, it's best to combine the Pentium 4 Northwood and the Pentium M.
  • mkruer - Thursday, March 3, 2005 - link

    #4

    Nah, then Intel can't charge an additional $15 per northbridge chip.
  • xsilver - Thursday, March 3, 2005 - link

    Is there a more detailed reason as to WHY Intel does not go with the on-die memory controller?
    Has AMD patented it, and are they unwilling to license it?

    Hasn't the success of the AMD64 shown that the on-die memory controller is highly effective at improving performance?
  • alangeering - Thursday, March 3, 2005 - link

    "Although we're quite convinced that an on-die memory controller would result in the best performance per transistor expended on a new architecture, we're doubtful that Intel would consider one. We may have to wait until stacked die and wafer technology before we see any sort of serious reduction in memory latency through techniques other than more caches and more cores."

    Well noted, but to expand a little: the latency drop when going to stacked die/wafer technology comes from two things:
    1. Proximity to the core.
    2. An on-die memory controller -- Intel would have to provide one, since pairing an external controller with stacked-wafer RAM would be poor engineering.

    So, expect to see these things together.
  • RadeonGuy - Thursday, March 3, 2005 - link

    In Soviet Russia, people learn to shut the hell up.
  • Brian23 - Thursday, March 3, 2005 - link

    In Soviet Russia, paper launches you!
