
  • LoneWolf15 - Friday, August 26, 2005 - link

    I think the Network Manageability Engine is a great concept, but only if it can be updated when necessary through some sort of PROM setup or other option. As resilient as it seems today, someone will eventually find a way around it. At that point, if the hardware can't be reprogrammed with updates to meet the threat, it will be useless and instantly obsolete.
  • Regs - Friday, August 26, 2005 - link

    I had the same general reaction as all of you guys. What's the use of all this crap? However, I guess it's just Intel wanting to show where the future might lead. I just hope AMD are the ones holding the baton.
  • mkruer - Thursday, August 25, 2005 - link

    Is it my imagination, or is this year's IDF very lackluster (lacking brightness, luster, or vitality; dull)? It seems to me that I'm not remotely interested in any of the hardware they are offering. About the only thing I saw that was interesting was the virtualization technology, but even that will take years to come into the mainstream. This, for me, will go down as one of the more mundane conferences. So far I have seen a lot of hype and a bunch of pretty slides.
  • mikecel79 - Thursday, August 25, 2005 - link

    Running demos of Intel's next generation hardware is not impressive? Lots of 65nm chips running isn't impressive?
  • Kensei - Friday, August 26, 2005 - link

    I agree. I don't know what's not to like here. This is a peek at the future, not a peek at next week. Multi-core, hyperthreading, etc. have huge implications for software engineering and computer science in general regarding how and when to best "divide and bring back together" various computational processes. It adds great complexity to the software engineering field at a time when it already has difficulty writing code that isn't buggy and/or easily exploited. I think this will make architecting (is that a word?) software, before writing the code, an even more important step. Unfortunately, software architecture is something not often taught either on the job or in universities.

    What I find interesting about this whole "diamond" thing is why Intel is interested in this sort of stuff at all. It seems much more suited to the type of research being done by MS or at universities. I may be missing something, but what's hardware architecture got to do with identifying people in pictures? Is Intel planning on entering the software development world also?
  • PrinceGaz - Tuesday, August 30, 2005 - link

    Yeah, Diamond does seem like a totally software-related project, and unless Intel codes it to only run on their processors (which they probably would), it would work just as well on AMD chips.

    I suppose Intel have made a few video codecs in the past which were quite well used; maybe they are planning on doing something like that again, but restricting their use to Intel chips this time?
  • mkruer - Friday, August 26, 2005 - link

    Intel's next generation is the problem. After the Prescott debacle, Intel's response has been a knee-jerk reaction; just look at some of the benchmarks for Yonah and Sossaman (what a joke).

    My predictions for 2006/2007

    1. Intel will go core-ballistic with their “performance per watt” push and run into the opposite mistake of the “extreme gigahertz” era, i.e. they will have lots of slower cores, but they can't be used effectively by common x86 applications.
    2. VT will be rushed out the door, and be quickly replaced by VT2

    I have been hearing that Intel will be no threat to AMD in 2006, and after reading all the tech sites, it looks like that is 100% correct. Intel, for all their “new technology”, is still playing catch-up.

    Like I said, the only two really interesting things are VT and the new “dynamic” L2 cache that is going to be used for power savings. That's about it.

    BTW, to date I think Transmeta's Crusoe chip holds the record for “performance per watt”, running the equivalent of a 500MHz CPU at 1 watt.
  • CSMR - Friday, August 26, 2005 - link

    You have to take into account that power is something like voltage^2*speed, and voltage can be reduced if speed is reduced, meaning that performance/watt is not a good metric; something between performance/watt and performance^3/watt would be better.
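
    To spell that argument out (the voltage-tracks-frequency step is a rough DVFS assumption on my part, not an exact law):

    ```latex
    P \approx C V^{2} f, \qquad V \propto f \;\Rightarrow\; P \propto f^{3},
    \qquad\text{so}\qquad
    \frac{\mathrm{perf}}{\mathrm{watt}} \propto \frac{f}{f^{3}} = \frac{1}{f^{2}},
    \qquad
    \frac{\mathrm{perf}^{3}}{\mathrm{watt}} \propto \frac{f^{3}}{f^{3}} = 1
    ```

    Plain perf/watt always rewards the slowest clock you can get away with, while perf^3/watt is blind to voltage scaling entirely, which is why something in between is arguably the fairer yardstick.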
  • ElFenix - Friday, August 26, 2005 - link

    Performance per watt is very important for a lot of customers. Anywhere that has a huge number of cores running will love this, especially as energy prices go up. Datacenters are already paying attention to this; their AC bills alone would make most places choke. Many customers need nothing more than a 1GHz P3 to run their email, Word, and PowerPoint. To get these people to upgrade will require a serious focus on TCO, and power management is going to be a huge part of that.
  • mkruer - Friday, August 26, 2005 - link

    But therein lies the problem. With this massively multi-core approach it will show huge “performance per watt”, but only in very massively parallel environments. The good news is that most server applications will be able to use multiple threads; the bad news is that having multiple threads does not mean a 1-to-1 increase per core. In real life the maximum number of cores that would be utilized to their full extent is around 4, because there are too many processes that require the outcome of an earlier process (rough numbers in the sketch below).

    What is required is a balance, and so far, from what I have seen, Intel is not gunning for balance but for yet another PR stunt.
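
    Rough numbers behind that “around 4 cores” claim, using Amdahl's law as a stand-in for the dependency problem described above (the 10% serial fraction below is just an assumed figure for illustration):

    ```python
    # Minimal Amdahl's-law sketch: ideal speedup vs. core count when a fixed
    # fraction of the work is serial, i.e. must wait on the outcome of an earlier step.

    def amdahl_speedup(cores: int, serial_fraction: float) -> float:
        """Ideal speedup on `cores` cores when `serial_fraction` of the work cannot be parallelized."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    if __name__ == "__main__":
        serial_fraction = 0.10  # assumed value for illustration; real workloads vary widely
        for cores in (1, 2, 4, 8, 16):
            print(f"{cores:2d} cores -> {amdahl_speedup(cores, serial_fraction):4.2f}x speedup")
        # With 10% serial work: 2 -> ~1.8x, 4 -> ~3.1x, 8 -> ~4.7x, 16 -> ~6.4x.
        # Each doubling past ~4 cores buys noticeably less, which is the point above.
    ```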
  • joex444 - Friday, August 26, 2005 - link

    Kind of like taking the car that gets 45 mpg but does 0-60 in 14 seconds over the V8, gotcha.
  • 4AcesIII - Thursday, August 25, 2005 - link

    And suddenly Anandtech becomes less and less impressive as a serious review page, let alone an unbiased one. Some failed fake-ality show producer musta wormed his way onto the Anandtech staff, because I've seen Olympic skiers go downhill slower :) than the credibility of this page.
  • JarredWalton - Thursday, August 25, 2005 - link

    Apparently the heuristics are smart enough not to pick up on something like BitTorrent as opening/closing too many connections at a time? That's interesting if true, as many peer-to-peer clients seem like they would trip the network activity threshold. "Do no harm." I suppose if you asked the RIAA/MPAA, shutting down peer-to-peer wouldn't be harmful in the least.... ;)
  • mikecel79 - Thursday, August 25, 2005 - link

    Most P2P clients don't open up as many connections as a virus would. An average virus would open up hundreds of connections a second, whereas a P2P client probably does 3-5 connections a second, if that many.
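
    Intel hasn't published how its heuristics actually work, but a crude rate-based check along these lines illustrates the idea; the class name, threshold, and window length below are all made up for illustration:

    ```python
    # Hypothetical sketch of a rate-based outbound-connection heuristic (NOT Intel's
    # actual implementation): count new outbound connections in a short sliding
    # window and flag the host if the rate looks worm-like rather than P2P-like.
    import time
    from collections import deque
    from typing import Optional

    class ConnectionRateMonitor:
        def __init__(self, max_per_window: int = 50, window_seconds: float = 1.0):
            # 50/sec is an assumed threshold: well above typical P2P (a few per second),
            # well below a scanning worm (hundreds per second).
            self.max_per_window = max_per_window
            self.window_seconds = window_seconds
            self.events = deque()  # timestamps of recent outbound connection attempts

        def record_connection(self, now: Optional[float] = None) -> bool:
            """Record one outbound connection attempt; return True if the rate looks suspicious."""
            now = time.monotonic() if now is None else now
            self.events.append(now)
            # Drop timestamps that have fallen out of the sliding window.
            while self.events and now - self.events[0] > self.window_seconds:
                self.events.popleft()
            return len(self.events) > self.max_per_window
    ```

    A real engine would presumably weigh more than raw connection counts (destination diversity, failure rates, and so on), precisely because a bare threshold is easy to duck under, which is the point raised a couple of comments down.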
  • joex444 - Friday, August 26, 2005 - link

    And what if the sneaky, pesky virus writers figure out the threshold and write their virus to not exceed it? Wouldn't Intel's anti-worm thing think it's just a really active legit program?
  • KristopherKubicki - Thursday, August 25, 2005 - link

    A program like Tor might.

    Kristopher
  • Hacp - Thursday, August 25, 2005 - link

    I think I like the picture finder. Does it have things like what kind of background the picture has, black/white, etc.?
