Original Link: https://www.anandtech.com/show/1670
WinHEC 2005: Coverage Wrap
by Derek Wilson & Jarred Walton on April 29, 2005 2:00 PM EST
Posted in: Trade Shows
Introduction
And yet again, we've saved the best for last. This article takes everything we've missed over the past week and rolls it together into one giant Microsoft fest. We've got the expo floor, more Longhorn goodness, and some interesting tidbits about the next version of DirectX and why current hardware may not be able to provide "Gold" level Logo compatibility.

Here's a look at a pretty cool Avalon demo we caught at the show:
For the quick summary, Avalon will be the new programming interface for desktop graphics and will allow easy hooks into 3D hardware. This will likely be similar to the way OS X uses OpenGL to accelerate its desktop. We'll get into a bit more detail in a moment.
Beyond Avalon and graphics, there is plenty of information left to cover. We hope that you enjoy our final report on WinHEC 2005.
Media Center Edition PCs
Since October 2004, one million MCE PCs have been shipped. In contrast, the total number of MCE PCs shipped since Windows MCE first became available is only two million units. There's a reason Microsoft and their partners are pursuing multimedia applications so heavily, as they continue to see this market as an area of rapid growth. We've got some additional information on the topic that we hope you'll find interesting.

There were a couple of key features that came up repeatedly in regard to media center devices. Besides performance and the user interface, the need to make them small and quiet was clearly a focal point. A presentation by NVIDIA discussed these conflicting goals and the balancing act that must be maintained. (We'd provide you some of the slides from the presentation, but apparently the slides were only finalized the night before, so we didn't get a copy on the WinHEC CD that was provided for attendees.)
One example given was the heat generated by a standard 7200 RPM hard drive. Without any form of active cooling (i.e. no case and/or PSU fans), HDD temperatures quickly scaled up to 50+ °C. Even a small amount of airflow was able to reduce that to a more tolerable 40 °C, but the problem lies in the choice of components: a cheap $5 fan can do the trick, but often at a much higher noise level.
This leads to a topic of particular interest: the use of "smart" fans that can regulate their speed based on temperature. Such components are often luxuries in a larger case, but the SFF segment frequently uses such designs. NVIDIA's own testing has confirmed what we already know: an intelligent and well-designed cooling system can often pack more components into a smaller area and still achieve better thermal and acoustic performance.
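To make the idea concrete, here's a minimal sketch (our own, not NVIDIA's or any vendor's firmware) of the kind of temperature-to-speed mapping a "smart" fan controller implements; the threshold values are illustrative assumptions:

```cpp
#include <cstdio>
#include <initializer_list>

// Minimal fan-curve sketch: map the hottest component temperature (in
// degrees C) to a PWM duty cycle (0-255). The thresholds are illustrative
// values, not from any shipping controller.
unsigned char FanDutyFromTemp(double tempC)
{
    const double idleTemp = 35.0;  // below this, the fan can stop entirely
    const double maxTemp  = 55.0;  // at or above this, run flat out
    if (tempC <= idleTemp) return 0;
    if (tempC >= maxTemp)  return 255;
    // Linear ramp between the two thresholds; real controllers often add
    // hysteresis as well, so the fan doesn't hunt around a set point.
    double t = (tempC - idleTemp) / (maxTemp - idleTemp);
    return static_cast<unsigned char>(t * 255.0);
}

int main()
{
    for (double temp : { 30.0, 40.0, 50.0, 60.0 })
        std::printf("%2.0f C -> duty %3d/255\n", temp, FanDutyFromTemp(temp));
}
```

The payoff is exactly the balancing act NVIDIA described: the fan spends most of its life silent or slow, and only spins up when the thermals actually demand it.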
Prototype Media Devices
We've snapped pictures of some of the units that were being demonstrated at the show that were targeting the media center audience. There are of course numerous ways in which to approach such a market, but we'll be focusing here on the more unique designs. Probably half to two-thirds of the booths in the Expo Hall had a media device present, with one to many streams of HD and/or DVD content all running in real time.

Texas Instruments
Texas Instruments was demonstrating this "Split Chassis PC Product". While the main system was a standard PC connected to an HDTV, the small module you see above was connected to the main system via a PCIe X1 cable. With a bidirectional signaling rate of 2.5 Gbps (which works out to 250 MBps of usable data throughput once the 62.5 MBps of encoding overhead is subtracted), the connection is more than capable of handling all of the necessary traffic. The end result is that the large media processing PC can be stored out of sight and the only visible interface is this small box, roughly the size of a typical 5.25" DVD drive (2x7x10 inches). Keyboard, mouse, USB, Firewire, and audio connections are all present on the "I/O Hub", along with IDE, CardBus/PCMCIA, and a flash card reader.
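For those keeping score, the math behind those figures is simple enough; here's a quick sanity check (the 8b/10b encoding is standard PCIe behavior, not something TI specified to us):

```cpp
#include <cstdio>

// Sanity check on the PCIe X1 numbers: 2.5 Gbps of raw signaling,
// minus the standard PCIe 8b/10b encoding overhead (2 of every 10 bits).
int main()
{
    const double rawMBps    = 2.5e9 / 8.0 / 1e6;     // 312.5 MBps on the wire
    const double usableMBps = rawMBps * 8.0 / 10.0;  // 250 MBps of payload
    std::printf("raw: %.1f MBps, overhead: %.1f MBps, usable: %.1f MBps\n",
                rawMBps, rawMBps - usableMBps, usableMBps);
}
```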
Texas Instruments feels this provides a compelling solution, with the PCIe X1 cable providing up to 7 meters of distance between the main PC and the I/O Hub. The PC still needs to be close enough to connect to your HDTV, of course, which is a limitation of the DVI or component video cables that are used. There are certainly some interesting use cases for the design, e.g. a console located in a unit next to your sofa with the noisy PC placed where it can be neither seen nor heard. Couch potatoes of the world will rejoice at the ability to change a DVD without ever standing up!
Embedded Device Processors
While the Athlon and Pentium processors receive the lion's share of attention these days, there are still many alternative processors that can be used for media devices. Intel has the XScale architecture, which is non-x86 - it implements an extension of the ARMv5TE architecture. There may have been some devices using this processor, but we didn't notice any. Moving on, AMD has their Geode GX and NX processors, VIA has the Eden CPU as well as a new Luke processor, and Transmeta has their Efficeon processors. We've got images of some prototypes for all of these platforms.

AMD Geode
Starting with AMD, the Geode processor is designed as a low power embedded solution. The GX model shown above left consumes a minuscule 1.1W at 533 MHz. For video content, a hardware decode/encode chip would be required for most formats. The NX is a higher performance version, as well as a higher power device. We didn't get a clear shot of the screen, but DVD quality video was being decoded without problems on the system on the right. HD content would probably be a struggle without some additional hardware assistance, however.
Transmeta Efficeon
Our next stop was at the Transmeta booth, where they were demonstrating their 100% silent media center sample. You can see the internal components as well as the exterior. According to the booth personnel, the sample unit was designed and built for the show in only a few months using mostly off-the-shelf hardware. One of the more interesting aspects is the use of a dual-tuner card (that's the large add-in card you see on the left of the image), allowing one channel to be recorded while a second is being watched. We're not convinced about the need to handle 4+ simultaneous streams, but there is definitely a good use case for handling at least two.
It's also interesting to note that a Mobility Radeon 9700 was used in the prototype, which is probably a lot more power than most companies would put into a media center device. On the other hand, that does give them the ability to handle Longhorn (with the Aero Glass UI) once it ships. With no fans in the system, it should be totally silent (other than perhaps the DVD drive when it's in use) - not that we could verify this over the crowd noise. However, we're a bit concerned about long-term reliability, and a single temperature-sensitive, low-RPM fan might be a useful addition. This would help to guarantee that the HDD and other components are kept cool, helping to increase their lifespan. This can be particularly important during the summer months.
VIA Eden and Luke
Our last examples of media center devices come from VIA. The Eden processor has been around for quite some time, but this was the first we had heard of the Luke processor. Basically, it's their North Bridge solution with the Eden integrated - and we say it that way, rather than the Eden with an integrated NB, because the NB is roughly 3X the size of the CPU! The whole board is covered by a passive heat sink, but note the small size of the board: at 10cm x 10.5cm, it's not much larger than the attached 1.8" HDD. While the Luke setup was only shown powering XP, a nearby Nano platform was shown decoding 720p HDTV without trouble. (The photo is blurry due to our camera and photography skills, not the LCD or platform.) VIA touts the Nano platform as the "most highly integrated platform for small digital entertainment devices," and at 12cm x 12cm it certainly isn't large. Now they just need to clean up their wiring....
VIA SMP Eden
We didn't get a shot of the working prototype system (well, we did, but it was so blurry that it was unusable), but we did get some photos of the board and specs for VIA's take on the SMP market. That's not how they described it, but we thought the dual-processor Eden setup was rather intriguing. At 17cm x 17cm, it's larger than their other demo platforms, but it also includes quite a few more features. Besides the two Eden CPUs, it has GbE along with a second 10/100 Ethernet port, two DIMM slots, 2 x SATA and 1 x IDE, and the CN400 NB (integrated UniChrome Pro graphics, hardware support for MPEG2 decoding, and MPEG4 acceleration). It's basically the same as the Nano platform, but with SMP and a larger board. Two Eden chips aren't going to set any performance records, but then the target market isn't really about raw performance. Anyway, we thought it was pretty cool, considering the small size of the CPU connections and the low heat output. Maybe a Luke-II CPU could incorporate both cores with the chipset?
SaintSong
Here's a final shot of a couple of small units from a company called SaintSong that were in the hardware showcase. We're not even sure what the specs are, but they're just barely thicker than a typical 5.25" DVD-RW drive while not being quite as long. Assuming the price, performance, and features can at least match the Mac Mini - with perhaps better media support - we're sure there will be some interested buyers.
Just to reiterate, these are all prototype or reference designs at best. Interested vendors could add or remove features from any of these platforms in order to meet the demands of their target market. We may never see any of these specific units for sale, but the concepts presented could easily show up in next generation cable/set-top boxes. Microsoft's vision of a digital home is going to require a lot more than large ATX-size PCs, and these smaller devices are meant to fill those roles.
Avalon and Windows Graphics Foundation 2.0
Avalon is the new face of Windows graphics and promises to bring new and wonderful capabilities to desktop applications. Having worked with the Windows GDI, we can absolutely say that anything would be a step up, and we hope that Avalon will deliver on its promises.

The idea is that Avalon sits on top of the Windows Graphics Foundation (WGF is the new name for DirectX) and will allow easy access to the advanced 2D and 3D features of modern hardware on the desktop. The Aero Glass demo shown at the keynote used pixel shaders to render semi-transparent borders that blurred whatever sat behind the window. This is just one example of what can be done with Avalon, and we hope that application developers will find new and better ways to improve the actual interface rather than just adding eye candy. One of the Avalon demos we saw included an application to organize images in a folder. The images were arranged in a cylinder with the view at the center. It was a pretty neat little demo of what is possible on the desktop with the new technology.
Avalon is part of the WinFX package that will be released for Windows XP as well as Longhorn. This will allow developers to quickly adopt new features without needing to worry about legacy support for Windows XP users. Of course, it removes the new advanced UI, remote, and filesystem technology from the list of reasons to upgrade when Longhorn comes along.
Metadata from the ground up is a theme of Longhorn's UI. Files will have icons that are rendered from the actual data and can be zoomed at will. This means that your documents, spreadsheets, and presentations will show the actual text, charts, graphs, and slides contained in them. Microsoft is even making it easier for proprietary file formats to participate by allowing vendors to supply very limited decode functionality without anything else (this way, we will even be able to view .RAW and other interesting file formats via the OS).
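To give a sense of what "very limited decode functionality" might mean in practice, here's a hypothetical sketch of such an interface (emphasis on hypothetical: Microsoft hadn't published the actual one at the show) in which a format vendor implements just enough decoding to hand the shell a preview bitmap:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch only - this is not Microsoft's actual interface.
// The idea: a format vendor ships the bare minimum needed to render a
// preview, with none of the full application attached.
struct PreviewBitmap
{
    std::uint32_t width  = 0;
    std::uint32_t height = 0;
    std::vector<std::uint32_t> pixels;  // 32-bit ARGB, row-major
};

class IPreviewDecoder
{
public:
    virtual ~IPreviewDecoder() = default;

    // Decode just enough of the file to produce a thumbnail no larger
    // than maxEdge pixels on its longest side. Returns false if the data
    // is unreadable; the shell would then fall back to a generic icon.
    virtual bool DecodePreview(const std::uint8_t* data, std::size_t size,
                               std::uint32_t maxEdge, PreviewBitmap& out) = 0;
};
```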
On top of that, the Windows Graphics Foundation already essentially exists on Windows XP: WGF 1.0 is also known as DirectX 9.L and will add a couple of new features beyond DirectX 9.0c. These new features include: cross-process shared surfaces, managed graphics memory (and virtualized graphics memory), prioritization of resources, text antialiasing, advanced gamma functions, and "device removed" handling (the hardware can be "removed" and then "added" in order to recover gracefully from a failure). The device removed feature will also be used for hot replace/add functionality. These are the basic advancements in the graphical interface of Longhorn.
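The "device removed" model essentially asks applications to treat the GPU as something that can vanish and come back. As a rough sketch (the function names below are stand-ins of our own, not actual WGF 1.0 calls), a render loop would handle it like this:

```cpp
#include <cstdio>

// Sketch of a render loop built around the "device removed" model.
// RenderFrame and CreateDevice are stand-ins of our own, not WGF calls.
enum class FrameStatus { Ok, DeviceRemoved };

// Stub: a real implementation would submit a frame and report whether
// the hardware disappeared mid-flight.
FrameStatus RenderFrame() { return FrameStatus::Ok; }

// Stub: a real implementation would reacquire the device and reload
// every GPU resource the application owns.
bool CreateDevice() { return true; }

void RunFrames(int frames)
{
    for (int i = 0; i < frames; ++i)
    {
        if (RenderFrame() == FrameStatus::DeviceRemoved)
        {
            // The device was "removed" (driver fault, hot swap, etc.).
            // Instead of crashing, recreate it and carry on - the same
            // path would handle hot replace/add of graphics hardware.
            while (!CreateDevice()) { /* wait for the device to return */ }
            std::puts("device recovered");
        }
    }
}

int main() { RunFrames(100); }
```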
Microsoft is going to make it a point not to break existing applications with Longhorn. All functions of older APIs will be mapped to DX9 functions. It will be interesting to see if there is any performance improvement here, because Microsoft is planning on mapping older fixed function features to vertex and pixel shader programs. But the really interesting part of Longhorn (at least to graphics nuts) will be Windows Graphics Foundation 2.0.
In order to take advantage of all the advanced features of Longhorn, hardware will need to support WGF 2.0. Microsoft is currently considering making WGF 2.0 support a requirement for Gold Logo certification (meaning that it would be impossible to buy a computer system today that is fully Gold Logo). But we can't make any hard and fast statements about either WGF 2.0 or the Gold Logo program as neither is finalized.
We do know a little bit about the upcoming next step in the DirectX line. Aside from the new features in WGF 1.0, we will also see some new pipeline stages. Geometry shading will allow programmers to work with a whole primitive at a time (where vertex and pixel shaders only allow working with a single vertex or pixel). This will enable such things as easily generating per-primitive data for pixel shaders, easier handling of normals and colors, and some manipulation of the primitive itself (extrusion/expansion).
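Actual geometry shaders will be written in shader code, but a CPU-side sketch in C++ illustrates why having the whole primitive matters: computing a face normal for extrusion is trivial when you can see all three vertices at once, and impossible in a vertex shader without extra per-vertex data.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Scale(v, 1.0f / len);
}

// Per-primitive work: push a whole triangle outward along its face normal.
// A geometry shader sees all three vertices at once, so the face normal
// falls out naturally; a vertex shader, touching one vertex at a time,
// would need the normal precomputed and stored per vertex.
void ExtrudeTriangle(const Vec3 tri[3], float dist, std::vector<Vec3>& out)
{
    Vec3 n = Normalize(Cross(Sub(tri[1], tri[0]), Sub(tri[2], tri[0])));
    for (int i = 0; i < 3; ++i)
        out.push_back(Add(tri[i], Scale(n, dist)));
}
```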
The fact that DX10 (or WGF 2.0) hardware will need to allow stream output from the middle of the pipeline (and feedback to the front of the pipeline with predication) means much more flexibility for developers. For example, developers could use this feature to create a fast, efficient way to generate dynamic reflection, environment, or shadow maps (especially when combined with the new geometry shader). And vertex and geometry shader textures will be completely supported as well.
We can also expect Microsoft to come through with a much more detailed and stringent specification for WGF 2.0. It seems that the extent to which DX9 hardware can vary has gotten their attention. In one of the sessions we attended, it was stressed that Microsoft doesn't want software developers to have to cater to multiple hardware paths in order to get good performance. In fact, they went so far as to say that they want a tight enough spec that all WGF 2.0 hardware will support the same features implemented in the same way. To that end, all capability bits have been eliminated from WGF 2.0 (meaning that hardware either will or will not support exactly the same set of functions as all other WGF 2.0 hardware). Vendor specific extensions will still be accessible through OpenGL, and we can expect the top graphics IHVs to try to differentiate themselves somehow. It may just become more of an AMD/Intel type of race where the differences come down to the underlying hardware architecture and technology.
Aside from all of this, a feature of the LDDM (Longhorn Display Driver Model) that we find really interesting (aside from Windows-managed virtualized memory) is that Microsoft wants to provide a scheduler for the GPU in order to have multiple graphics programs running on a single piece of hardware at the same time. At first, this will be supported by simply batching together requests from different processes and scheduling them, but the future will be more complex. Hardware makers will need to put some effort into providing for context switching in graphics hardware. It will be interesting to see how efficiently this is implemented, as it could generate quite a bit of hardware overhead.
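As a rough illustration of that first-generation approach (our own toy model, nothing like the actual LDDM code), think of a round-robin queue of command batches, one queue per process:

```cpp
#include <cstdio>
#include <deque>
#include <map>
#include <string>

// Round-robin batch scheduler sketch. The real LDDM scheduler will be far
// more involved (priorities, preemption once hardware supports context
// switching); this only shows the first-generation idea of interleaving
// whole command batches from different processes on one GPU.
class GpuScheduler
{
public:
    void Submit(int processId, std::string batch)
    {
        queues_[processId].push_back(std::move(batch));
    }

    // One scheduling pass: execute one pending batch per process in turn.
    void Tick()
    {
        for (auto& [pid, q] : queues_)
        {
            if (q.empty()) continue;
            std::printf("GPU executing batch from process %d: %s\n",
                        pid, q.front().c_str());
            q.pop_front();
        }
    }

private:
    std::map<int, std::deque<std::string>> queues_;
};

int main()
{
    GpuScheduler s;
    s.Submit(101, "desktop composition");
    s.Submit(202, "video decode");
    s.Submit(101, "window blur pass");
    s.Tick();  // one batch each from processes 101 and 202
    s.Tick();  // the remaining batch from process 101
}
```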
That's about the extent of our current knowledge of WGF 2.0 and the LDDM. To be sure, we will bring you more information as we get hold of it.
Longhorn Images
We've already talked about Longhorn, but we've neglected to show any new shots of the platform. Taking photos of a presentation isn't the best source for images, but we figured some of you might want to see the current version. The short description would be to take XP and install something like WindowBlinds, and you have a very similar appearance to the Longhorn demonstration. That doesn't cover the changes that are happening under the surface, though - and there really are quite a few changes. First, here are the requisite pictures.
If you're not shocked and amazed by the new look of Longhorn, don't worry - you're not alone. Some of the features, like the vectorized graphics, can be useful, but they've been done elsewhere already. It should all end up adding some pizzazz to the Windows UI, and hopefully the revamped display engine will improve performance. We won't really know the full impact that Longhorn will have until it ships in about two years. We did get several discs with builds of the current Longhorn versions, so we may take a closer look at how the OS is shaping up in the future. Since it's still a ways off, it isn't one of our top priorities.
Note: You may have heard all the brouhaha about the posting of Longhorn screen shots. We figure since these are photos of the demonstration, and there was no restriction we're aware of pertaining to the posting of such images, we should be okay. Unfortunately, due to the angle and the conditions in which the photos were taken, they're pretty blurry. We'll hear from Microsoft if they have a problem with these images, of course.
Longhorn Manufacturing and Installation Features
One of the sessions we attended covered the OEM/business side of Longhorn preparations. Home users don't often think about these issues, but for a company like Dell, installing and configuring the OS on millions of PCs is a critical matter. Some of the basic problems with deployment of Windows are summarized in the following slide.

If none of those items sound particularly important, then you probably haven't dealt with the deployment of a large number of PCs. Hardware dependencies, languages, etc. are all matters that have to be dealt with on a typical assembly line. Longhorn comes with quite a few changes to help their OEM and business partners.
Several of the bullets deal with the management of system images. While ghosting utilities have been used in the past, Longhorn will include its own imaging system. The underlying OS has also been componentized in order to facilitate deployment. For example, languages are now an entirely separate component; a single "world wide binary" is used for executables, and it simply plugs into whatever language pack(s) the user has installed. The imaging system appeared to be very robust - as only the OS's own developer could make it.
Windows image (WIM) files can contain multiple configurations, and a single file called "unattend.xml" will contain all of the settings for a specific installation. So, rather than having a separate image for Home, Pro, and Media Center, with further images for each system platform, not to mention the various languages... you can get by with a single image! There will still be separate images for x86-32 and x86-64, however, and it wasn't entirely clear (to us) at what point a separate image might be required for different systems. Still, instead of dozens or even hundreds of disk images, the process can be reduced to perhaps two images per machine platform.
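We haven't seen the actual schema, so the element names below are purely our own invention, but conceptually an unattend.xml boils down to something like this: one file that selects a configuration out of the multi-configuration image and carries the machine-specific settings.

```xml
<!-- Purely illustrative: the real Longhorn unattend.xml schema had not
     been published at the time, so every element name here is invented.
     The point is that a single file selects one configuration out of a
     multi-configuration WIM and carries the machine-specific settings. -->
<unattend>
  <image source="install.wim" configuration="MediaCenter" />
  <language pack="en-US" />
  <disk id="0" partition="1" format="NTFS" />
  <oem>
    <computerName>LIVINGROOM-PC</computerName>
    <application>DVD Decoder Pack</application>
  </oem>
</unattend>
```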
Most of this is accomplished using a new tool called Setup Manager. Images can be created, and modifying them can be done by opening the image and adding or deleting files, or simply by dragging and dropping packages onto an existing image. (All of this can alternatively be done at the client level if desired, but that's basically the current way of doing things, so it's nothing new.) If everything goes as planned, an online ordering process could also be integrated with the new imaging tools to create an on-the-fly unattend.xml file for a specific machine. Then, on the assembly floor, once the hardware is built, the image and configuration file are used to set up the OS and software for a custom PC. The OEM partners at the session looked very interested in the new features, although it's certainly not going to have as big an impact on home users.
An interesting point that was made during this presentation (and covered elsewhere) is that DOS is going away. We'll still have a command prompt within Windows, but the DOS environment is basically dead. Consider the many businesses and OEMs that currently use DOS to load up a tool like Ghost, along with network drivers for specific platforms. A lot of new hardware - particularly SATA and USB devices - will not be properly supported by DOS. DOS will hit the end of the road on 12/31/05 as far as Microsoft is concerned, and there will be no way to use a DOS executable to start the Windows setup process. What's needed is a new tool to accomplish this task.
Enter Windows PE (Pre-installation Environment), basically a stripped down version of Windows with reduced API support. Like regular Windows, it can be customized to fit the need; it can even be turned back into the full Windows environment if desired, although that's counterproductive. (It also has a forced reboot that occurs every 24 hours.) WinPE can easily fit on USB flash devices, coming in at under 64MB for the minimum configuration, although 128MB flash devices may be more appropriate for customized toolsets. Network support, among other features, will be available, and the same drivers as regular Windows can be used. This should make things easier for driver teams as well as the system builders - at least, that's the plan. We certainly won't miss DOS much - R.I.P. MS-DOS, 2005. Incidentally, WinPE is available for use now (or very soon), and it will work with XP and Server 2003 while we wait on the release of Longhorn.
A few last comments on Longhorn information we picked up at the show. First, Longhorn should install much faster than XP, which is welcome news. The goal is for Longhorn to be installed and ready to use within 15 to 20 minutes of inserting the CD/DVD. DVD? That's right! It's not official yet, but we heard the comment several times that based on the current size of the Longhorn build, DVDs will probably be used. We hope so, as installing from a single DVD sounds a lot better than 3 to 5 CDs. (The current Longhorn image is over 900MB and still missing many drivers, so DVDs really make sense.) The installation process will be faster in part due to less file copying; the image will be extracted to the hard drive, and then enumeration of PnP devices will occur and you're done. With even high-end systems still taking 45 minutes to an hour to complete the XP installation, the thought of getting everything done in less than half that amount of time is definitely appealing.
More from the Expo Floor
While the show floor certainly wasn't as big as what we're accustomed to seeing at E3 or some of the other shows, there were still a few other items that we found interesting. There isn't really a central theme to this, so just pretend you're walking around the exhibition floor with us, bouncing from networking to PCI Express to servers, all the while passing LCD displays showing DVD and HD content. And just for the record, CRTs are basically dead. There wasn't a single CRT present at WinHEC, and no one is interested in trying to improve the technology. Even if you love your current 22" CRT, we expect that by the time you're ready to upgrade, any CRTs will be at least as old as what you're already using. That tangent aside, here are some of the other exhibits we saw on the floor.

Alienware Gaming Zone
There was a large gaming arena set up in one corner with 30+ Alienware systems. All of the systems were running XP-64 with a 64-bit build of Far Cry. System hardware consisted of Athlon 64 4000+ CPUs with 1GB of RAM and GeForce 6800 Ultra graphics cards. The two game servers were running SLI with (we think) 2GB of RAM in spectator mode. We couldn't really see any difference between the 64-bit version of Far Cry and the "old" 32-bit version, but we were told that it was "substantially faster". Hopefully, we can get the test version back into our labs to validate - or disprove - these claims. We didn't bother participating in the competition where the winner walked away with some hot hardware, but [*cough*] we figure it's nice to let someone else win now and then.
Intel's Dual Core Itanium (Montecito)
As far as we're aware, this is the first demonstration of Montecito running at full clock speeds (2.0 GHz). We might have that wrong - on both the clock speed and the "first" claim - but whatever the case, Intel did have a Montecito server present. Just to recap, the Itanium 2 Montecito chip is a dual core processor with SMT/HyperThreading on each core. It tips the scales at a mere 1.72 billion transistors, nearly fourteen times the size of the Prescott core (and made using the same 90nm manufacturing process). Of course, most of those transistors go to the 24MB L3 cache - we'd guess somewhere around 1.4 billion transistors just for the L3 cache and logic, if not more. Running in a dual socket motherboard, the result is eight virtual processors, as you can see in the following screenshot.
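Our 1.4 billion guess is easy to ballpark if you assume the standard 6-transistor SRAM cell plus a rough allowance for tags and ECC (the 15% overhead figure below is our assumption, not an Intel number):

```cpp
#include <cstdio>

int main()
{
    const double bits      = 24.0 * 1024 * 1024 * 8;  // 24MB of L3, in bits
    const double dataArray = bits * 6;                // 6T SRAM cell per bit
    const double total     = dataArray * 1.15;        // +15% tags/ECC (our guess)
    std::printf("data array: %.2fB transistors, with overhead: ~%.2fB\n",
                dataArray / 1e9, total / 1e9);        // ~1.21B and ~1.39B
}
```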
Intel's Pellston and Foxton Technologies
The server was running the Itanium version of Windows (we're not sure if it was XP-64 or the older Windows for IA-64), but that wasn't really important. What was important was that the chips were up and running and that Intel had a demonstration of some of the new technologies for the platform. One of the new technologies present in Montecito is called Pellston. The Pellston technology, if you haven't heard of it, is a way for Intel to increase the yield and reliability of the Montecito processors. With the massive L3 cache, it's pretty much inevitable that there will be defects. Pellston allows the processor to detect cache errors and deactivate the faulty cache lines. The "MCA Viewer" application in the above screenshot shows the results after a second system simulated the failure of various cache lines - 10 so far in the example. Those of you who might worry about the loss of performance with decreased amounts of cache, have no fear: Intel will replace chips that develop more than 90 faulty cache lines (less than 0.1% of the total L3 cache).
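Conceptually, Pellston needs little more than a map of which cache lines have been taken out of service. A toy sketch (ours, certainly not Intel's implementation) might look like this:

```cpp
#include <bitset>
#include <cstddef>
#include <cstdio>

// Toy sketch of cache-line deactivation - our own model, not Intel's
// implementation. When ECC flags a line as unreliable, it is marked dead;
// lookups then treat that line as a permanent miss and go to memory.
constexpr std::size_t kLines = 24 * 1024 * 1024 / 64;  // 24MB / 64B = 393,216 lines

class FaultTolerantCache
{
public:
    void RetireLine(std::size_t line)       { dead_.set(line); }
    bool LineUsable(std::size_t line) const { return !dead_.test(line); }
    std::size_t RetiredCount() const        { return dead_.count(); }

private:
    std::bitset<kLines> dead_;  // one bit per cache line
};

int main()
{
    static FaultTolerantCache l3;  // static: the bitset alone is ~48KB
    l3.RetireLine(12345);          // ECC detected a hard fault here
    std::printf("retired %zu of %zu lines (%.4f%% of the cache)\n",
                l3.RetiredCount(), kLines,
                100.0 * l3.RetiredCount() / kLines);
}
```

Note how forgiving the replacement threshold is: even 90 retired lines out of 393,216 is still only about 0.02% of the cache.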
Another technology present in Montecito that we didn't see demonstrated is called Foxton. Foxton allows for dynamic overclocking of the processors if certain thermal and load conditions are met. With a sufficient cooling setup, the processors can run at up to 2.2 GHz, at least periodically. That's only a 10% overclock, and it's not even guaranteed, but it's interesting for an Enterprise Class server chip nonetheless. Again, Intel reiterated that Itanium is not a dead technology. They further elaborated that "there are technologies present in the Itanium platform that you can't get with Xeon." The real competition for Itanium 2 comes from the IBM POWER5 and Sun UltraSPARC, and with a cost of around $5000 per Montecito processor, Intel's offering is actually something of a "bargain" by comparison.
AMD's Quad Opteron
While we're on the topic of servers, AMD had the above Quad Opteron system running with 16GB of RAM (or possibly 32GB - we couldn't confirm the exact amount). It wasn't doing anything particularly useful at the time, and we couldn't determine whether the system was using the new dual core Opteron chips or not. If it was, a "wrestling match" between it and the earlier Montecito server might have proved interesting. It might be difficult to set up an apples-to-apples test, however, as the Itanium does rather poorly when running x86 code and it doesn't support x86-64.
Even More from the Expo Floor
NEC PCI Express Bridge
These are a couple of shots from the NEC booth showing off their PCI Express bridge technology. "Doctor Frankenstein, meet PCI Express...." This monstrosity is obviously not meant to reside in any shipping system; it's meant as a demonstration that the chips work properly. Pretty much every storage technology other than Firewire was present and working (as far as we could tell). Also of note was that the main bridge card used an X8 connector plugged into an X16 slot, showing the downward compatibility of PCI Express. Meanwhile, the system was connected to a monitor running - surprise - Windows Media Player with a DVD or HD video stream.
Phoenix UEFI
We mentioned EFI (Extensible Firmware Interface) briefly in the Day 1 coverage, and we actually saw a demonstration by Phoenix of their UEFI chip at the AMD booth. (UEFI = Universal Extensible Firmware Interface, which is the new version of EFI. Apparently Intel owns the EFI name or something.) You can see the "Hello, world!" equivalent of UEFI in the main menu, which allows booting via a standard BIOS or the new UEFI model. One of the interesting items we saw in the UEFI BIOS was a memory testing utility, along with a couple of other diagnostic tools. As if that weren't enough, it was also possible to connect to the Internet and browse the web. This was all apparently done through code running off the flash ROM on the motherboard, and we verified that the connection was real by pulling up the AnandTech web site. (We may have gained a new reader in the process.)
Sion Power Li-S
The road warriors of the world may find this next item interesting. Sion Power had some Lithium Sulfur (Li-S) batteries at their booth. The older first generation Li-S packed 225 WHr/kg, while the second generation batteries are able to increase that to 260 WHr/kg. Lithium Ion sits down at 135 WHr/kg, roughly half the energy-to-weight ratio of Li-S. There are two ways to go with Li-S, of course. The batteries at the booth were showing equivalent capacity in a lighter package, which will be useful for the thin and light segment. For higher end laptops (or even portable workstations), Li-S could instead provide double the battery life at the same weight, allowing moderately equipped laptops to literally run all day. Sion Power said that they are basically at the beta stage right now, with full production hopefully coming in the next 12 to 18 months.
Just to satisfy our own curiosity, we asked them about fuel cells. Fuel cells could apparently pack 6000 WHr/kg, but besides being dangerous (try taking a propane canister on an airplane), they're also only about 15% efficient. That means the 6000 WHr/kg fuel cell can only deliver around 900 WHr/kg of usable energy, although that's still 3.5 times the potential of Li-S batteries. Sion Power felt that we're probably still 5+ years away from having usable fuel cell technology for computers, so Li-S and similar technologies are going to be useful for quite a few more years.
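The arithmetic checks out, for what it's worth:

```cpp
#include <cstdio>

int main()
{
    const double fuelCellRaw = 6000.0;              // WHr/kg, claimed potential
    const double usable      = fuelCellRaw * 0.15;  // ~15% efficient: 900 WHr/kg
    const double liS         = 260.0;               // second-gen Li-S
    const double liIon       = 135.0;               // Lithium Ion
    std::printf("fuel cell usable: %.0f WHr/kg (%.1fx Li-S, %.1fx Li-Ion)\n",
                usable, usable / liS, usable / liIon);
}
```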
Eurocom Portable Workstations
We also saw some workstation class DTRs from Eurocom on the floor. These beasts include upgradeable custom PCI Express graphics cards (they explained to us that both MXM and Axiom have their flaws, so they had to roll their own). They are also planning on offering SLI configurations in their notebooks. We might just have to get our hands on one of these to compare to a desktop workstation. After all, what engineer wouldn't be happier designing anywhere other than their cube?

Closing Thoughts
If we were to be brutally honest, there wasn't a whole lot of new or amazing stuff shown at WinHEC. Even a first time tradeshow attendee (Jarred) noticed that there were a lot of bored looks. To quote a representative from a major hardware vendor, "I think a lot of companies just come out of respect for Microsoft." Still, after the rather lackluster keynote presentation, there was plenty of useful information when we dug a little deeper. With over 90% of the market, Microsoft pretty much defines the future of most computers right now. If they say it's time to have DX9 level 3D graphics in Windows - whether that be on a desktop, laptop, server, or workstation PC - then it's pretty much a foregone conclusion. It will be interesting to see how the server platforms make use of 3D (or if they even make use of it).

While we applaud Microsoft for bringing forth the advancements in Longhorn, we have to admit that they are long overdue. The advancements in metadata and file representation have been around in the Linux world for years, and Apple's OS X is still head and shoulders above anything we've seen of MS' new look and feel. We are certainly not going to be the ones to speak out against Microsoft for trying to bring their OS up to par with the rest of the world. Honestly, being the platform for which most applications are written today should place weightier expectations on Windows than on other operating systems. Microsoft has a responsibility to live up to their position in the marketplace, and unless they make some huge unforeseen strides with Longhorn, they will simply be keeping pace rather than setting it.
One of the questions we liked to ask of the various hardware vendors was what they thought the real-world use scenarios and performance benefits of 64-bit would be. Obviously, servers and workstations will benefit, as will anyone with a large amount of memory who wants to work with a large data set. Graphics artists who want to open numerous large images at the same time will find it helpful, for example. But what about the more typical users that only have 1 GB of RAM in their system? We heard a lot of claims of "substantial performance increases," but there were very few people willing to quote a concrete number. AMD claimed that 20% was common, but several others felt that 5% or so was a more realistic expectation, brought about by the additional registers. Obviously, opinions on what constitutes a substantial increase will vary, but if it's only 5%, we think most people are better off continuing with XP-32 for the present.
Speaking of AMD, we chatted briefly with Barry Crume (AMD's director of product planning for the server/workstation segment) about the future of processors. There were a couple of interesting things that came up in our conversation (and since Barry didn't say otherwise, we'll assume it's okay that we mention this). The first thing we talked about was CPU clock speeds. Where Intel in the past talked about reaching 10 GHz and beyond, it's telling that one of their sessions was entitled "Intel Enterprise Platform - Value Beyond GHz." One fact that came up is that at 5 GHz, light can only travel 6 cm per clock cycle, and an electrical signal would manage more like 3 or 4 cm. Barry said that he felt 4 to 5 GHz may be the maximum clock speed that we'll see without some fundamental changes in technology or really long pipelines. He also stated that CPU designs with 10 to 15 pipeline stages appear to be the optimal solution, although as many as 20 is perhaps reasonable. In other words, the days of 28 to 39 pipeline stages (Willamette, Northwood, and Prescott) are basically over. Intel is admitting as much with their increased focus on the Pentium M architecture.
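The 6 cm figure is just the speed of light divided by the clock frequency; a quick check:

```cpp
#include <cstdio>
#include <initializer_list>

int main()
{
    const double c = 2.998e8;  // speed of light in m/s
    for (double ghz : { 2.0, 5.0, 10.0 })
    {
        // Distance light covers in one clock period, in centimeters.
        // On-chip electrical signals propagate at roughly half c or less,
        // which is where the 3-4 cm estimate at 5 GHz comes from.
        double cm = c / (ghz * 1e9) * 100.0;
        std::printf("%5.1f GHz: %.1f cm per cycle\n", ghz, cm);
    }
}
```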
The other interesting comment from Barry was his statement that AMD will be using DDR2 in a future processor. We had asked if they might simply skip DDR2 altogether and head straight to DDR3, and he said they would not be doing so. His explanation was that DDR2-400 didn't make any sense at all, and while DDR2-533 has some advantages, it's really only at DDR2-667 that the performance benefits become useful. How useful? He stated that a 10% performance increase was likely. Whether this was just a guess or whether AMD already has DDR2 prototype processors - remember that with the integrated memory controller, AMD will have to create a new version of their chips to support a new memory type like DDR2 - we don't know. We were actually surprised at his response, as we thought DDR3 was close enough that AMD would simply skip a generation. Then again, they might like the idea of allowing Intel users to convert to AMD with the only required purchases being the motherboard and processor. Having a common memory platform certainly isn't bad for consumers.
At the end of the show, while we would definitely say that WinHEC doesn't have the flash or glitz of Computex, E3, or some of the other larger shows, there was still plenty of good information to be found. Hardware devices still need robust drivers to achieve maximum performance, and that's what WinHEC really focuses on. Will we go again next year? Maybe, or if not, DevCon 2006 should be only a few months before the ship date of Longhorn. It's certainly useful to keep tabs on how MS views the industry. A better option, judging by recent MS release dates, is that we'll be sure to attend WinHEC 2007 for the actual launch of Longhorn. (The current official word is that Longhorn is due in mid to late 2006. However, given the slippage of XP-64 and features like WinFS that have been pushed back to a later update, we remain skeptical.)