Original Link: https://www.anandtech.com/show/1635



The last day of IDF is usually reserved for an entertaining keynote by Pat Gelsinger, but since Pat has moved on to his new role as Senior VP and GM of the new Digital Enterprise Group, someone else had to fill his shoes.

That someone else is none other than Justin Rattner, quite possibly the best person for the job. Rattner's keynote started off on a clearly nervous note, but wouldn't you be nervous if you were on the receiving end of the Gelsinger torch?

Mannerisms aside, Rattner is an extremely capable engineer and he should have no problem fulfilling the demands of his new role. His keynote today didn't top Gelsinger's past keynotes, but he's on the right track toward picking up where Pat left off.

The keynote was a bit long and drawn out, but in the usual Gelsinger style, Rattner provided a good look at what's coming down the road in a segment he called Platform 2015. The idea is to look at the trends that will be reality in the next decade; here are some of the more interesting points...



The Super Resolution demo

With the proliferation of broadband come higher expectations for the quality of the media you find on the internet. Unfortunately, not everyone has a good digital camera, and not everyone has a good digital video camera. With more and more cell phones shipping with integrated cameras in particular, the movies made on them usually look pretty poor when viewed at even twice their native size.

At the same time, we've all seen TV shows like 24 or CSI where someone sitting at a keyboard can simply "sharpen that up" and make even the blurriest, lowest resolution image clear enough to pick out someone's face. We all usually scoff at the idea and complain about how unrealistic things like that are, but in reality, there is some truth to what's going on.

There is a set of algorithms that looks at images and performs per-pixel motion analysis as well as per-frame statistical analysis to enhance the resolution of an image or a movie. In today's keynote, this technique was referred to as Super Resolution (no comment).
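To give a rough sense of how such an algorithm is put together, here's a minimal shift-and-add sketch in Python/NumPy. It's our own illustration of the general idea (register the frames, then accumulate them onto a finer grid), not Intel's implementation; real super-resolution code would use sub-pixel, per-pixel motion estimates rather than the single whole-frame shift used here.

import numpy as np

def estimate_shift(ref, frame):
    # Whole-frame phase correlation: a crude stand-in for the per-pixel
    # motion analysis described above.
    cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap to signed offsets
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def super_resolve(frames, scale=2):
    # Align every frame to the first one, upsample by nearest-neighbor
    # replication and average; each frame contributes slightly different
    # samples of the scene, which is where the extra detail comes from.
    acc = np.zeros((frames[0].shape[0] * scale, frames[0].shape[1] * scale))
    for frame in frames:
        dy, dx = estimate_shift(frames[0], frame)
        aligned = np.roll(frame, (dy, dx), axis=(0, 1))
        acc += np.kron(aligned, np.ones((scale, scale)))
    return acc / len(frames)

# Stand-in input: ten 60x80 grayscale frames from a low-res clip.
frames = [np.random.rand(60, 80) for _ in range(10)]
print(super_resolve(frames).shape)   # (120, 160)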

The demo was a loop of about 3 seconds of low resolution video of a caution sign; you can see the original video cap below:

Taken on a camera phone, the video was then cleaned up using these Super Resolution techniques to produce the following:

The results were nothing short of impressive - but why the demo? It took current-generation microprocessors about a minute to clean up that 3-second video; doing a full-length conversion on more difficult material would require around 1000x the compute power that current platforms offer. Rattner used Super Resolution as an example of the kind of workload that multi-core CPUs will be able to enable by 2015.
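A quick bit of arithmetic shows the scale of the problem; the only figures below that came from the keynote are the 3-second clip and the roughly one-minute processing time, everything else (the 2-hour movie, same quality and resolution) is our own assumption.

demo_clip_s  = 3        # length of the demo clip, in seconds
processing_s = 60       # time today's CPU needed to clean it up
print(processing_s / demo_clip_s)                   # already ~20x slower than real time

movie_s = 2 * 60 * 60   # assumed: a full-length, 2-hour movie
print(movie_s / demo_clip_s * processing_s / 3600)  # ~40 hours at the same quality

Scale the resolution and the algorithm's complexity up for "more difficult material" and Rattner's 1000x figure doesn't look far-fetched.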



The Memory Bandwidth Challenge

By 2015, Rattner predicted, Intel CPUs will have 10s or 100s of cores on each die, which in turn will require a lot of memory bandwidth. The problem with memory bandwidth at that level is that you effectively become pin limited: you can't physically fit enough pins on a microprocessor package to allow for a memory bus wide enough to deliver the sort of bandwidth necessary to feed those 10s or 100s of cores.

One solution that Rattner presented was 3D die and wafer stacking. Normally, microprocessor circuits are laid out on a flat 2D surface; as the name implies, 3D die and wafer stacking builds on top of that, literally.

First, let's talk about wafer stacking. Wafer stacking involves stacking two identically sized/shaped wafers on top of each other and using through-silicon vias (interconnects) to connect the top wafer to the bottom one. The best example of an application would be a DRAM wafer sitting on top of a CPU wafer, meaning that you would have memory (not cache; that would still be inside your CPU) sitting directly on top of your CPU.

With wafer stacking, instead of having hundreds or thousands of pins between your CPU and main memory, you have 1 to 10 million connections between your CPU and memory, directly increasing memory bandwidth. What's interesting is that this method of stacking could also mean the end of external memory.
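To put the pin-count argument into rough numbers, here's a hypothetical comparison; every figure in it (128 data pins, 1 Gbit/s per pin, 1 million vias at a tenth that rate) is our own illustrative assumption, not something Intel quoted.

def bandwidth_gb_per_s(links, gbit_per_link):
    # Aggregate bandwidth = number of data links x per-link signaling rate,
    # converted from gigabits to gigabytes per second.
    return links * gbit_per_link / 8.0

print(bandwidth_gb_per_s(128, 1.0))        # conventional pin-limited bus: 16 GB/s
print(bandwidth_gb_per_s(1000000, 0.1))    # stacked-wafer vias: 12500 GB/s

Even if each via runs at a fraction of the speed of a package pin, the sheer number of connections overwhelms the pin-limited bus.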

Die stacking is another possibility: here you stack multiple die of different sizes on top of the CPU core logic, and those die could be DRAM, Flash memory or really anything else. Intel showed off an 8-layer die-stacked configuration, which according to Intel is a very realistic option.

Rattner was fairly confident in the potential of die and wafer stacking, so it's a technology that we'll definitely have to keep an eye on as time goes on. There are limitations to consider, such as power and thermal dissipation, but there are solutions in the works for those as well (e.g. nanoscale thermal pumps).



A Fully Virtualized Platform

Rattner also talked about fully virtualizing the platform, showing off a demo where two virtual machines both had access to Intel's integrated graphics core. It would be interesting to see this sort of technology implemented by ATI and NVIDIA in their GPUs.

A Parallel Programming Model

A major demand of extremely parallel (thread-level) architectures is the need for parallel compilers and thread management, and Rattner talked about one method of properly implementing software to take advantage of multi-core CPUs.

A software run-time management layer handles load balancing across the multiple cores of a 2015 CPU. Individual cores can be instructed to power down based on the application's requirements, which will definitely be necessary when you're dealing with 10s or 100s of cores on a chip.

Intel demoed an example of such a management layer with a multi-core network processor, showing certain cores going to sleep when they weren't in use.
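The concept maps fairly naturally onto a task-queue model. The sketch below (Python, purely our own illustration of the idea rather than Intel's run-time) load-balances work items across a handful of simulated cores and lets idle cores block - the stand-in for powering down - until new work arrives.

import queue
import threading

task_queue = queue.Queue()

def core(core_id):
    while True:
        task = task_queue.get()    # an idle core blocks ("powers down") here
        if task is None:           # shutdown signal from the management layer
            break
        task()
        task_queue.task_done()

# The "management layer": one worker thread per simulated core, with the
# shared queue providing the load balancing.
workers = [threading.Thread(target=core, args=(i,)) for i in range(4)]
for w in workers:
    w.start()

for _ in range(8):
    task_queue.put(lambda: None)   # enqueue dummy work items
task_queue.join()                  # wait for all work to drain

for _ in workers:
    task_queue.put(None)           # wake idle cores and tell them to stop
for w in workers:
    w.join()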

Final Words

All in all, Rattner's keynote was a bit long-winded, but an excellent first attempt; the vision he presented of 2015 was quite realistic, and we've got a lot more to talk about at the architectural level in our next article...
