Original Link: https://www.anandtech.com/show/1632



Our first day of IDF coverage has come to a close, and it has been quite eventful. From the barrage of multi-core information to the 64-bit evangelism, down to the tech showcase, we've seen quite a bit from Intel.

It is usually a pleasure to listen to Pat Gelsinger deliver a keynote, and today was no different. The Intel VP took us through new technologies from Intel that will be introduced over the next year or two. Included in the long list of topics covered are network acceleration, advanced management features, virtualization, and, of course, multi-core. We got to see a 32-way Itanium machine recognize faces in a split second, as well as another demo of Intel's virtualization technology -- this time focused on security.

On the floor of the tech showcase, we saw quite a few interesting booths. There was everything from fuel cell and battery research to a new server-oriented chip from ATI (which we will talk more about later in the week). Also appearing was the NVIDIA nForce4 SLI Intel Edition, bringing to light the fruits of the NVIDIA/Intel cross-licensing agreement announced in November. We'll be talking more about these and other exciting exhibits over the course of IDF.

We hope you enjoy the conclusion to our coverage of IDF Spring 2005 Day 1. It's time for us to collapse into dreamland and prepare for another big day tomorrow.

Intel I/O Acceleration Technology: Improving Network Performance

When looking at network servers, a huge percentage of the overall work being done is protocol overhead processing. To get more of the work the user actually cares about done, the industry has pushed forth network hardware that handles some of the overhead of TCP/IP processing. Network cards that include a TCP Offload Engine (TOE) help reduce the overhead processing done on the server's CPUs.
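
To put that overhead in perspective, here is one small piece of the per-packet work that a TOE takes off the host CPU: the 16-bit Internet checksum from RFC 1071, shown as a minimal C sketch (our illustration, not Intel's or any NIC vendor's code). Done in software, loops like this run for every packet sent and received:

```c
/* A minimal sketch (not Intel's code) of one piece of per-packet
 * work that a TCP Offload Engine moves off the host CPU: the
 * 16-bit ones'-complement Internet checksum (RFC 1071). */
#include <stdint.h>
#include <stddef.h>

uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {               /* sum the data as 16-bit words */
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                   /* pad a trailing odd byte */
        sum += (uint32_t)data[0] << 8;

    while (sum >> 16)               /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}
```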

Rather than simply rely on a NIC with a TOE, Intel has found that improving the rest of the platform is actually a better way of handling network overhead (especially when looking at many small packets).

We didn't get an in-depth technical look at IOAT at Pat's keynote today, but we do have a little general information. The slide we are looking at lists "CPU Improvements" and "OS Support/IA Tuned Software" as architectural enhancements behind IOAT. Making faster and more parallel processors will definitely help without needing any IOAT-specific optimizations, and OS support of hardware features is usually a prerequisite for their use. Of course, there could be more to these aspects of the technology, but we'll have to wait and see. The real meat likely comes in the chipset data accelerations, Data Movement Engines, and Edge Device Accelerations. We could envision enhancements to the NIC and chipset that allow for further removal of overhead processing, as well as optimized or prioritized movement of data through the system. We'll bring you more details as we get them.
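
We can only guess at the programming interface, but conceptually, a Data Movement Engine would let the host post a copy request and do useful work instead of executing the memcpy itself. A purely speculative sketch - the dme_* names are our hypothetical stand-ins, with a synchronous software fallback so that it compiles:

```c
/* Purely speculative sketch of what a Data Movement Engine might
 * look like to software: the CPU posts a copy descriptor and later
 * checks for completion instead of executing the copy inline.
 * The dme_* names are hypothetical, not a real Intel API; the
 * stubs fall back to a plain memcpy so the sketch is self-contained. */
#include <stdbool.h>
#include <string.h>

typedef struct {
    bool done;
} dme_request;

/* On real hardware, this would enqueue a descriptor and return
 * immediately; the stub just does the copy synchronously. */
static dme_request dme_submit_copy(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);
    return (dme_request){ .done = true };
}

static bool dme_is_complete(const dme_request *req)
{
    return req->done;
}

void deliver_payload(void *user_buf, const void *nic_buf, size_t len)
{
    dme_request req = dme_submit_copy(user_buf, nic_buf, len);

    while (!dme_is_complete(&req)) {
        /* overlap the copy with protocol processing or other work */
    }
}

int main(void)
{
    char nic_buf[64] = "packet payload";
    char user_buf[64];
    deliver_payload(user_buf, nic_buf, sizeof nic_buf);
    return 0;
}
```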



IAMT, VT, and Why Should I Want Virtualization?

IAMT is Intel's new Active Management Technology. They didn't go into much (any) detail on how it works, but they talked about it as a separate entity at the hardware level that is able to monitor and correct problems on the rest of the system. This enables higher levels of reliability across the Intel platform, and it will likely be more interesting to the server administrator than to the desktop customer. From other descriptions of IAMT, we can speculate that it consists of a custom operating system stored in hardware (similar to the BIOS) that allows secure network access and is able to interact with the rest of the system. We know that IAMT will be able to operate regardless of software or hardware state. In other words, a hard-locked computer, a dead hard drive, or even a powered-down system (as long as there is hard power to it) won't get in the way of IAMT working.

One of the biggest advances Intel is trying to push now is virtualization (with Intel Virtualization Technology: VT). Other than simply adding another way to utilize the parallelism that the dual-core push will offer, hardware virtualization will allow quite a few new usage models for personal computers.

Hardware virtualization is the ability of the platform to partition hardware and allow software to run as if it had full control of the hardware. Companies like VMware have been building software-level technology that attempts to virtualize systems for quite a while, but there are many more advantages to virtualizing at the hardware level.


A version of VMware running with Intel VT support
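
To make "partitioning the hardware" concrete: under hardware virtualization, a guest OS runs normally until it attempts a privileged operation, at which point control traps out to the virtual machine monitor. A conceptual C sketch of that trap-and-emulate loop follows - the types and names here are illustrative, not Intel's VMX interface:

```c
/* A conceptual sketch of the trap-and-emulate model behind hardware
 * virtualization: the guest runs until a privileged operation causes
 * an exit, the VMM emulates it, then resumes the guest. Illustrative
 * types and stubs only - not Intel's VMX interface. */
#include <stdio.h>

typedef enum {
    EXIT_IO_ACCESS,      /* guest touched an I/O port or device */
    EXIT_HALT,           /* guest executed HLT (idle) */
    EXIT_SHUTDOWN        /* guest powered itself off */
} exit_reason;

typedef struct { int id; } guest;

/* Stand-in for hardware guest entry; a real VMM would use
 * VMLAUNCH/VMRESUME and read the exit reason from the VMCS. */
static exit_reason run_guest(guest *g) { (void)g; return EXIT_SHUTDOWN; }

void vmm_loop(guest *g)
{
    for (;;) {
        switch (run_guest(g)) {      /* runs until the guest traps */
        case EXIT_IO_ACCESS:
            /* emulate the device access on the guest's behalf */
            break;
        case EXIT_HALT:
            /* guest is idle - schedule another partition */
            break;
        case EXIT_SHUTDOWN:
            printf("guest %d powered off\n", g->id);
            return;
        }
    }
}

int main(void)
{
    guest g = { .id = 1 };
    vmm_loop(&g);
    return 0;
}
```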

As we have pointed out in past IDF shows, virtualization could allow one single computer to run more than one OS, even mixing Windows and Linux. Multiple people could be using a single system as if it were more than one, provided the processing power were adequate.

Aside from this perspective, Intel also sees security as a major advantage of virtualization. In the past, they have talked about maintaining a dedicated management partition running a standardized installation. This would allow companies to easily fix problems on a PC from a sort of quarantined operating system (that even the computer's everyday user would not have access to).

In order to show the advantage of this, Pat demonstrated what happens when a computer on a corporate network is infected with a virus. In the demo, the IAMT-enabled server that the computers are connected to automatically detects the virus and disconnects the machine from the network. On a non-virtualized computer, the user must then run a virus scanner locally. On the virtualized hardware, Intel is able to disconnect the infected partition and enable the management partition to scan and clean the system. After the management partition has cleaned the virus, the user's partition is able to reconnect to the network virus-free. The demo showed all of this happening in the time it took Pat to explain what was going on.
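
Our reading of the demo's flow, sketched below with hypothetical stand-in functions (this is the sequence as we understood it, not Intel's API):

```c
/* Sketch of the quarantine flow from Pat's demo - the function
 * names are hypothetical stand-ins, and the stubs just print the
 * steps so the sequence is visible when run. */
#include <stdbool.h>
#include <stdio.h>

static bool network_detects_virus(void)      { return true; }
static void isolate_user_partition(void)     { puts("user partition offline"); }
static bool management_partition_clean(void) { puts("scanning from quarantine"); return true; }
static void reconnect_user_partition(void)   { puts("user partition back online"); }

int main(void)
{
    if (network_detects_virus()) {
        /* 1. the IAMT-enabled infrastructure cuts the infected
         *    partition off from the network */
        isolate_user_partition();

        /* 2. the hidden management partition scans and cleans the
         *    user's installation from outside of it */
        if (management_partition_clean())
            /* 3. once clean, the user's partition rejoins the network */
            reconnect_user_partition();
    }
    return 0;
}
```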



Intel and Microsoft on 64-bit - “It’s Time”

In the biggest slap AMD’s face has received in quite a while, Pat Gelsinger devoted a good bit of keynote time to Intel’s move to 64-bit from the “data center to the desktop.”  It sounded a lot like AMD’s spiel about x86-64 from a couple of years ago, but this time, it was Intel saying it and commanding that it happen.

The chart below shows Intel’s aggressive ramping strategy for bringing EM64T to desktops and servers.

The percentage market penetration that Intel is talking about here is nothing AMD could ever have dreamed of in their early days of AMD64.  Yet, as everyone predicted long ago, Intel let AMD do all of the dirty work of getting people interested in why 64-bit was necessary, then swooped right in at the end to reap the benefits - and to do so bigger and faster than AMD.  Pat spent basically no time explaining why 64-bit was important, and all of his time explaining how quickly Intel was going to implement it and why the time was now.

Microsoft’s Jim Allchin came on stage and echoed Gelsinger’s statements with the simple line “it’s time.”  Other than the mobile platforms, there wasn’t a single platform on display at IDF that wasn’t 64-bit enabled - a huge change from the way things used to be. 

When AMD first started talking about 64-bit on the desktop, Intel said that it wouldn’t be needed there for another three years - it looks like they were right.

Microsoft will be shipping final versions of Windows Server 2003 x64 Edition and Windows XP Professional x64 Edition sometime next month.  The slide below talks about some of the performance gains they’ve seen internally due to 64-bit support (thanks to the added registers and greater memory addressability):
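
The extra registers are hard to show in a few lines of code, but the memory addressability point is easy to make concrete. A minimal sketch - our illustration, not Microsoft's numbers: with 32-bit pointers, a single buffer over 4GB can't even be requested, while a 64-bit build takes it in stride.

```c
/* A minimal sketch (our illustration, not from the slide) of the
 * addressability half of that claim: a single allocation larger
 * than 4GB cannot be expressed with 32-bit pointers at all. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long long six_gb = 6ULL << 30;   /* 6 GiB */

    if (six_gb > (size_t)-1) {
        /* On a 32-bit build, size_t tops out below 4GiB, so this
         * request cannot even be passed to malloc(). */
        printf("32-bit build: cannot address a %llu-byte buffer\n", six_gb);
        return 1;
    }

    char *buf = malloc((size_t)six_gb);       /* fine with 64-bit pointers */
    printf("64-bit build: malloc(6GiB) %s\n", buf ? "succeeded" : "failed");
    free(buf);
    return 0;
}
```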



Parallelism, Parallelism, Parallelism

Pat has been on a parallelism kick for a long time now; when AMD was talking about 64-bit and on-die memory controllers, all Pat was interested in was parallelism - more specifically, thread-level parallelism.  It’s taken a few years, but Pat is finally seeing his exciting technology come to fruition, given that next year, Intel expects 80% of its total shipments to be multi-core enabled.

Pat put together an interesting graph showing the performance boost we’ve seen during the lifetime of the Pentium 4, from inception to present day; the current-generation Pentium 4 is about 3x the speed of the first Pentium 4:

Assuming heavy adoption of multithreaded software and workloads, this is where Pat sees performance going in the future due to multi-core:

It’s a very optimistic chart, but Pat Gelsinger isn’t one to exaggerate.  Given that the entire industry is behind multi-core CPUs, it wouldn’t be too far-fetched to see significant performance gains from multi-core between now and the end of the decade.

We mentioned before that Intel predicted 8-thread CPUs by the end of the decade, but now we have some more information about that prediction thanks to Pat’s presentation:

In mainstream client PCs, Intel will be shipping CPUs capable of executing two simultaneous threads in 2005; these will be the Pentium D (dual core) and the regular Pentium 4 with Hyper-Threading (2 logical processors).  Note that Intel isn’t including the Pentium Extreme Edition in this figure (with two Hyper-Threaded cores, it is able to execute 4 threads simultaneously), as it is not a mainstream client processor.  On the server side, Intel is targeting 8 threads per CPU in 2005/2006.

Towards the end of the decade, Intel is pushing for 8 threads on clients - most likely meaning that mainstream PCs will have four independent cores, each with Hyper-Threading technology (or 8 simpler cores instead).  On the server side, we’re looking at 32 threads per chip by the end of the decade.
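
As a small aside, this thread count is what software actually sees: the OS simply enumerates logical processors, so a Hyper-Threaded dual core shows up as four of them. A minimal POSIX sketch (our example; Windows code would call GetSystemInfo() instead):

```c
/* Minimal sketch: ask the OS how many hardware threads it sees.
 * A Hyper-Threaded Pentium 4 reports 2, a Pentium D reports 2,
 * and the Pentium Extreme Edition reports 4. POSIX only; on
 * Windows, you would use GetSystemInfo() instead. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long threads = sysconf(_SC_NPROCESSORS_ONLN);
    printf("Logical processors visible to the OS: %ld\n", threads);
    return 0;
}
```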



Intel Xeon - Last to Dual Core

While AMD is bringing dual core to the workstation and server markets first (pin-compatible with current Socket-940 motherboards), Intel is not migrating the Xeon to dual core just yet.  Instead, Intel will bring the Itanium 2 (Montecito) and the Pentium D to market first as its dual-core offerings at 90nm, and won’t offer a dual-core Xeon until 2006 in a 65nm version.

Dual core will really get interesting at 65nm, as Intel will have dual-core solutions in servers, workstations, desktops and mobile systems by the end of 1H 2006.



Dual Core Die Shots - Let’s Zoom In

Pat’s presentation gave us the first die shots of the Pentium D and Presler; unfortunately, they were fairly zoomed out.  But with a bit of Photoshop work, we can see why Intel split the two cores apart in Presler.

As we’ve reported before, Presler is made up of two distinct pieces of silicon, each one similar to the present-day Prescott 2M core (with some added features).  In contrast, the Pentium D is made up of one piece of silicon that is basically two Prescott 1M cores “glued” together.  Looking at the chart above, you can see that the Pentium D (in the 90nm next-generation column) is a pretty big chip.  So in Presler, the two cores are split into separate dice while still giving you two cores in a single package.


A single piece of silicon makes up the Pentium D


Two pieces of silicon make up Presler

The two cores in Presler are totally independent, meaning that they must communicate with each other over the external front side bus rather than over any internal bus.  And while the package carries 4MB of L2 cache in total, it is split into two private 2MB halves - a single thread can never have more than 2MB of its data in cache at once.
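
A crude way to see this from software is a cache-sizing walk: time accesses over growing working sets and watch latency jump once a single thread outgrows the 2MB available to its core, well short of the 4MB the package carries. A rough sketch - our illustration under those assumptions, not a rigorous benchmark:

```c
/* A crude cache-sizing sketch (not a rigorous benchmark): walk
 * buffers of increasing size and time the accesses. On a
 * Presler-style part, times per access should rise once the
 * working set exceeds the 2MB L2 private to one core, even though
 * the package holds 4MB of L2 in total. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double walk(volatile char *buf, size_t size, size_t iters)
{
    clock_t start = clock();
    size_t idx = 0;
    for (size_t i = 0; i < iters; i++) {
        buf[idx] += 1;              /* touch one byte per cache line */
        idx = (idx + 64) % size;    /* 64-byte stride = typical line size */
    }
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    size_t sizes[] = { 1u << 20, 2u << 20, 4u << 20, 8u << 20 };
    for (int i = 0; i < 4; i++) {
        char *buf = calloc(sizes[i], 1);
        if (!buf) return 1;
        printf("%5zu KB working set: %.3f s\n",
               sizes[i] >> 10, walk(buf, sizes[i], 20 * 1000 * 1000));
        free(buf);
    }
    return 0;
}
```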



More FBDIMM Talk

During the keynote, Pat breezed through some benefit charts of FBDIMM in the server world.  For those of you who aren’t familiar, FBDIMM employs a serial interface based on PCI Express to connect individual FBDIMMs to one another and, eventually, to the memory controller.  The point of the FBDIMM spec is to allow for higher-performance memory controllers while at the same time increasing the number of memory slots on a motherboard.

Generally speaking, the higher the frequency of your memory bus, the fewer memory slots you can have on a motherboard.  This isn’t normally a problem for desktop boards, since you rarely need more than 2 - 4 slots, but on a server platform with 16+ slots, it can be an issue.  You shouldn’t have to trade memory speed for capacity, and FBDIMM attempts to ensure that you don’t.

Kingston also had FBDIMM samples on display:



NVIDIA Demos nForce4 with SLI for Intel Platforms

Despite rumors of it being called the nForce5, NVIDIA showcased their nForce4 chipset for Intel processors at IDF.

The chipset is up and running:

We didn't find out much about the chipset at NVIDIA's keynote, but we later got a full demo of the chipset and some shots of vendor boards:

We can't say too much about the chipset, but given that it's still being called an nForce4, you can pretty much guess almost all of the features it will support.



The Intel platform chipset is a two-chip solution featuring the nForce4 SPP:

and the nForce4 MCP:



Very Little BTX

Intel has pushed BTX considerably in past years, but this time around, we saw very little on the BTX front - possibly because of an unwillingness on the part of motherboard and case manufacturers to adopt the standard at this point.

The HTPC BTX box from Barrett's keynote was present at the showcase:

...as was the other BTX box:

The BTX designs showcased by Intel continue to be fairly lackluster, although we did like the HTPC style case.



Rambus XDR in DIMM Form

Rambus had a functional demo of their first XDR XDIMMs - a change from the board-soldered XDR devices they’ve shown off in the past.

You still retain full bandwidth regardless of how many XDIMM slots are populated in this design.

Although the shot is a little blurry, you can see that there is no trace length matching on the board itself - Rambus' FlexPhase technology makes it unnecessary:

Final Words

All of that in just the first day of IDF. Tomorrow should be all about mobile, and we'll be here bringing it to you, so check back.
