Original Link: https://www.anandtech.com/show/12220/how-to-make-8th-gen-more-complex-intel-core-with-radeon-rx-vega-m-graphics-launched
Intel Core with Radeon RX Vega M Graphics Launched: HP, Dell, and Intel NUC
by Ian Cutress on January 7, 2018 9:02 PM EST

It’s here – one of the most unlikely partnerships in the semiconductor industry. Long-time rivals Intel and AMD, battling over x86 dominance for over 35 years, are now co-conspirators. Take one part Intel high-performance 8th generation processor, one part AMD efficient Vega graphics, mix together with 4 GB of HBM2 memory, and sprinkle in some high-performance interconnect.
The new products are officially labeled as ‘Intel 8th Generation Core with Radeon RX Vega M Graphics’, although this will be shortened to ‘Intel with Radeon Graphics’ for ease of use.
Intel with Radeon Graphics is, strictly speaking, a new Intel-specific product, to be marketed and distributed as such. Today’s official launch builds on the platform announcement made late last year, adding full specifications, confirmation of the feature set, and a series of new products based on Intel with Radeon Graphics set to be announced this week at CES from Dell, HP, and Intel’s own NUC line. Aside from the NUC, we have been asked to hold the other vendors’ information until their embargo times arrive.
As we explained in our previous coverage, Intel with Radeon Graphics uses an ‘H-series’ Intel central processor and an AMD Radeon graphics processor as two distinct pieces of silicon on the same package (a multi-chip package). Intel is buying the graphics processor from AMD, much like any other silicon purchase, and as a result AMD’s involvement is strictly business. The graphics processor is designed by AMD’s semi-custom division, the same group behind custom designs such as the chips that go into Sony’s PlayStation 4 and Microsoft’s Xbox line. The graphics processor is connected to high-bandwidth memory, specifically the second generation of HBM (known as HBM2), and the connection between the two uses Intel’s new embedded multi-die interconnect bridge technology, or EMIB for short. In other words, Intel CPU + AMD GPU + HBM2 + EMIB – acronyms are ablaze with these new chips.
The new products are an expansion of Intel’s traditional mobile product segmentation, which up until this point has consisted of the Y, U, and H series. The new parts fall under the ‘G’ name, designed to combine the high performance of an H processor with the versatility of a U processor. The overall list of segments, including the desktop and extreme segments, now looks as follows:
- Y – SoCs up to 4.5W, for portable fanless designs and 2-in-1s
- U – SoCs around 15W, for sleek high-performance lifestyle products
- H – CPUs around 35W/45W, offering more cores for larger enthusiast notebooks
- G – CPUs with additional graphics*, up to 100W
- S – Desktop processors
- X – A special segment for high-end desktops based on server processors
*May or may not have a separate graphics chip
The G-class Intel with Radeon Graphics processors being launched today are bigger than the standard processors we would normally expect for mobile platforms, coming in with TDPs of 65W and 100W, although how that power is managed between the two chips is part of Intel’s story with the new processors.
Without further ado, the specifications for the new parts are as follows:
Intel's 8th Generation Core Processors with Radeon RX Vega M Graphics

| | Core i7-8809G | Core i7-8709G | Core i7-8706G | Core i7-8705G | Core i5-8305G |
|---|---|---|---|---|---|
| CPU uArch | Kaby Lake | Kaby Lake | Kaby Lake | Kaby Lake | Kaby Lake |
| On-Package Graphics (pGPU) | Radeon RX Vega M GH | Radeon RX Vega M GH | Radeon RX Vega M GL | Radeon RX Vega M GL | Radeon RX Vega M GL |
| CPU Cores / Threads | 4 / 8 | 4 / 8 | 4 / 8 | 4 / 8 | 4 / 8 |
| CPU Base Freq | 3.1 GHz | 3.1 GHz | 3.1 GHz | 3.1 GHz | 2.8 GHz |
| CPU Turbo Freq | 4.2 GHz | 4.1 GHz | 4.1 GHz | 4.1 GHz | 3.8 GHz |
| pGPU Compute Units | 24 CUs (1536 SPs) | 24 CUs (1536 SPs) | 20 CUs (1280 SPs) | 20 CUs (1280 SPs) | 20 CUs (1280 SPs) |
| pGPU Frequency | 1063 MHz (1190 MHz Boost) | 1063 MHz (1190 MHz Boost) | 931 MHz (1011 MHz Boost) | 931 MHz (1011 MHz Boost) | 931 MHz (1011 MHz Boost) |
| pGPU Pixels/Clock | 64 | 64 | 32 | 32 | 32 |
| CPU PCIe 3.0 Lanes | 8 to Radeon Graphics, 8 for other use | 8 to Radeon Graphics, 8 for other use | 8 to Radeon Graphics, 8 for other use | 8 to Radeon Graphics, 8 for other use | 8 to Radeon Graphics, 8 for other use |
| Chipset + PCIe | 200-series derivative? 14-20 PCIe 3.0 lanes | 200-series derivative? 14-20 PCIe 3.0 lanes | 200-series derivative? 14-20 PCIe 3.0 lanes | 200-series derivative? 14-20 PCIe 3.0 lanes | 200-series derivative? 14-20 PCIe 3.0 lanes |
| Package TDP | 100 W | 100 W | 65 W | 65 W | 65 W |
| HBM2 Capacity | 4 GB | 4 GB | 4 GB | 4 GB | 4 GB |
| HBM2 Frequency | 800 MHz | 800 MHz | 700 MHz | 700 MHz | 700 MHz |
| HBM2 Bit-Width | 1024-bit | 1024-bit | 1024-bit | 1024-bit | 1024-bit |
| HBM2 Bandwidth | 204.8 GB/s | 204.8 GB/s | 179.2 GB/s | 179.2 GB/s | 179.2 GB/s |
| Quoted SP Perf | Up to 3.7 TFLOPS | Up to 3.7 TFLOPS | Up to 2.6 TFLOPS | Up to 2.6 TFLOPS | Up to 2.6 TFLOPS |
| Intel HD Graphics (iGPU) | HD 630 | HD 630 | HD 630 | HD 630 | HD 630 |
| iGPU Frequency | Up to 1100 MHz | Up to 1100 MHz | Up to 1100 MHz | Up to 1100 MHz | Up to 1100 MHz |
| DRAM | Dual Channel DDR4-2400 | Dual Channel DDR4-2400 | Dual Channel DDR4-2400 | Dual Channel DDR4-2400 | Dual Channel DDR4-2400 |
| ECC | No | No | No | No | No |
| vPro | No | No | Yes | No | No |
| Overclocking | CPU, iGPU, pGPU, HBM | No | No | No | No |

*Added Core i7-8706G on 1/8
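As a quick sanity check on the quoted single-precision numbers, peak SP throughput for GPUs of this class is conventionally derived as streaming processors × 2 FLOPs per clock (one fused multiply-add) × boost clock. A minimal sketch, assuming that standard formula is what Intel used:

```python
# Back-of-the-envelope check of the quoted SP TFLOPS figures, assuming the
# conventional peak-throughput formula: SPs x 2 FLOPs (FMA) x boost clock.

def peak_sp_tflops(stream_processors: int, boost_mhz: float) -> float:
    return stream_processors * 2 * boost_mhz * 1e6 / 1e12

print(f"Vega M GH: {peak_sp_tflops(1536, 1190):.2f} TFLOPS")  # ~3.66, quoted 'up to 3.7'
print(f"Vega M GL: {peak_sp_tflops(1280, 1011):.2f} TFLOPS")  # ~2.59, quoted 'up to 2.6'
```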
Each of the new parts is a quad-core design with HyperThreading, using Intel’s HD 630 GT2 graphics as the traditional ‘integrated’ low-power graphics (iGPU) for video playback and QuickSync. The CPU is connected via eight PCIe 3.0 lanes to the ‘package’ graphics (pGPU) chip, the Radeon RX Vega M, leaving eight PCIe 3.0 lanes from the CPU for other functionality (a discrete GPU, FPGA, RAID controller, Thunderbolt 3, 10 Gigabit Ethernet).
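For context on that x8 link, PCIe 3.0 bandwidth is easy to derive: 8 GT/s per lane with 128b/130b encoding. A quick sketch of what that gives the pGPU:

```python
# Peak bandwidth of the CPU-to-pGPU link: PCIe 3.0 runs at 8 GT/s per lane
# with 128b/130b encoding, so each lane carries ~0.985 GB/s per direction.

def pcie3_bandwidth_gbs(lanes: int) -> float:
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

print(f"x8 link to Radeon pGPU: {pcie3_bandwidth_gbs(8):.1f} GB/s per direction")  # ~7.9 GB/s
```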
On the Radeon RX Vega M graphics, there is one version of the silicon, of which there will be two variants: the ‘GH’ is the higher part (H for high), with 24 compute units (1536 streaming processors) and 64 pixels per clock, running at a base frequency of 1063 MHz with a boost up to 1190 MHz. The second variant is the GL part (L for low), with 20 compute units (1280 streaming processors) and 32 pixels per clock, running at a base frequency of 931 MHz with a boost up to 1011 MHz.
Both pGPU Radeon variants will have access to 4GB of HBM2 as immediate graphics memory (this is not available to the CPU as DRAM). With one stack of HBM2, this means a 1024-bit bus width, which for the GH parts runs at 800 MHz for 204.8 GB/s of bandwidth, or for the GL parts at 700 MHz for 179.2 GB/s. Intel likes to stress that this bandwidth is only possible due to its EMIB technology, as all non-Intel HBM2 products currently available require an expensive, bulky interposer that adds z-height: by using EMIB, Intel claims, the performance is on par with an interposer without the added bulk or most of the cost.
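Those bandwidth figures fall straight out of the interface math: HBM2 transfers two bits per pin per clock, so a 1024-bit stack at 800 MHz delivers 204.8 GB/s. A minimal sketch of the arithmetic:

```python
# HBM2 peak bandwidth = bus width (bits) x 2 transfers per clock (DDR)
# x memory clock, converted from bits to bytes.

def hbm2_bandwidth_gbs(bus_width_bits: int, clock_mhz: float) -> float:
    return bus_width_bits * 2 * clock_mhz * 1e6 / 8 / 1e9

print(f"Vega M GH: {hbm2_bandwidth_gbs(1024, 800):.1f} GB/s")  # 204.8 GB/s
print(f"Vega M GL: {hbm2_bandwidth_gbs(1024, 700):.1f} GB/s")  # 179.2 GB/s
```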
Only the Core i7-8809G, the highest placed processor, will offer overclocking. Intel has defined this chip as able to overclock the processor core, the integrated graphics frequency, the package graphics frequency, and the high bandwidth memory frequency (as well as system level DRAM). Overclocking will be dependent on how the vendor implements the feature set and software: nominally Intel is stating that in Windows 10, the CPU/iGPU can be overclocked through Intel XTU software, while the pGPU and HBM2 will need AMD’s Radeon Wattman software. It is not clear if/when an Intel-specific version of the latter will be distributed, given that when asked where the driver/software stack for the Radeon graphics should come from, we were told ‘all the software will come from Intel, consider this a fully-enabled Intel processor’.
We confirmed with Intel that all the processors support VT-x and VT-d, and that none of the processors support ECC. Only the i7-8706G is vPro enabled.
So Why Two Sets of Graphics?
There are a few possible answers to this question.
The cynical answer is that Intel is rehashing an H-series design for the CPU portion, so rather than spending money on new masks that cut the integrated graphics off, Intel is saving cost on what would be a low-volume product.
A technical reason, which readers may or may not agree with, has to do with functionality and power. Despite these chips being 65W and 100W parts, we are going to see them in 15-inch and 17-inch high-end devices, where design is a lifestyle choice but battery life is also a factor. For relatively simple tasks, such as video decoding or driving the display over eDP, firing up a big bulky graphics core with HBM2 is going to drain the battery a lot faster. By remaining on the Intel HD graphics, users retain those low-power scenarios while the Radeon graphics and HBM2 are switched off. There is also the case for Intel’s QuickSync, which can be used in preference to AMD’s encoders in a power-restricted scenario.
The Radeon graphics in this case offers power gating at the compute-unit level, allowing the system to adjust power as needed or as available. It also drives up to six 4K displays, on top of the three from the Intel HD graphics, for a total of nine outputs. The Radeon graphics supports DisplayPort 1.4 with HDR and HDMI 2.0b with HDR10, along with FreeSync/FreeSync2. As a result, when the graphics output switches from Intel HD Graphics to Radeon graphics, users will have access to FreeSync, as well as more displays than you can shake a stick at (if the device has all the outputs).
Users that want these new Intel with Radeon Graphics chips in desktop-class systems might not find much use for the Intel HD graphics. But for anything mobile or power-related, and especially for anything multimedia-related, it makes sense to take advantage of the Intel iGPU.
Navigating Power: Intel’s Dynamic Tuning Technology
In the past few years, Intel has introduced a number of energy-saving features, including more advanced speed states, Speed Shift for faster frequency transitions, and thermal balancing acts that allow OEMs like Dell and HP to configure total power draw as a function of CPU power requests, skin temperature, the orientation of the device, and the current capability of the power delivery system. As part of today’s announcement, Intel has plugged a gap in that power knowledge for when a discrete-class graphics processor is in play.
The way Intel explains it, OEMs that used separate CPUs and GPUs in a mobile device would design around a System Design Point (SDP) rather than a combined Thermal Design Power (TDP). OEMs would have to manage how that power was distributed – deciding, for example, how the CPU and GPU should react if the GPU was running at 100%, the SDP had been reached, and the CPU then requested more performance.
Intel’s ‘new’ feature, Intel Dynamic Tuning, leverages the fact that Intel now controls the power delivery mechanism of the combined package, distributing power to the CPU and pGPU as required. This extends how Intel already adjusts the CPU in response to outside factors – by using system information, power can be shared between the two blocks to maintain minimum performance levels and ultimately save power.
If that sounds a bit wishy-washy, it is because it is. Intel’s spokespersons during our briefing were heralding this as a great way to design a notebook, but failed to go into any detail as to how the mechanism works, leaving it as a black box for consumers. They quoted that a design aiming at a 62.5W SDP could, with Intel Dynamic Tuning enabled, be considered a 45W device, and that by managing the power they could also increase gaming efficiency by up to 18% more frames per watt.
One of the big questions we had when Intel first started discussing these new parts was how the system deals with power requests. At the time, AMD had just explained in substantial detail its methodology for Ryzen Mobile, with the CPU and GPU in the same piece of silicon, so it was a fresh topic in mind. When questioned, Intel wanted to wait until the official launch to discuss the power in more detail, but unfortunately all we ended up with was a high-level overview and a non-answer to a misunderstood question in the press-briefing Q&A.
We’re hoping that Intel does a workshop on the underlying technology and algorithms here, as it would help shine a light on how future Intel with Radeon designs are implementing their power budgets for a given cooling strategy.
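In the meantime, the general shape of a shared-budget arbiter is well understood, even if Intel’s specifics are not. Purely as an illustration – the floors, split policy, and numbers below are our own assumptions, not anything Intel has disclosed – a naive power-sharing step might look like this:

```python
# Hypothetical illustration only: Intel has not disclosed how Dynamic Tuning
# actually works. This sketches one naive way a shared CPU/GPU power budget
# could be arbitrated each control interval.

def share_package_power(budget_w: float, cpu_demand_w: float, gpu_demand_w: float,
                        cpu_floor_w: float = 10.0, gpu_floor_w: float = 15.0):
    """Split a package power budget between CPU and pGPU.

    Each block is guaranteed a floor (to maintain minimum performance),
    and the remaining headroom is split in proportion to demand.
    """
    headroom = budget_w - cpu_floor_w - gpu_floor_w
    cpu_extra = max(cpu_demand_w - cpu_floor_w, 0.0)
    gpu_extra = max(gpu_demand_w - gpu_floor_w, 0.0)
    total_extra = cpu_extra + gpu_extra
    if total_extra == 0.0:
        return cpu_floor_w, gpu_floor_w
    return (cpu_floor_w + headroom * cpu_extra / total_extra,
            gpu_floor_w + headroom * gpu_extra / total_extra)

# A GPU-bound game on a 65 W part: CPU asks for 25 W, GPU asks for 60 W.
print(share_package_power(65.0, 25.0, 60.0))  # CPU gets ~20 W, GPU gets ~45 W
```

The real mechanism presumably also folds in skin temperature, the power delivery limits, and per-block efficiency curves, which is exactly the detail Intel declined to share.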
Intel’s Performance Numbers
*Disclaimer: All performance numbers in this section are from Intel and have not been independently verified
On the face of it, this new product is a 7th Generation H-Series CPU combined with a mid-range RX 560-570 class graphics chip, albeit paired with super-fast memory, but with an overall power budget that can cap performance. As a result, pure CPU workloads are not going to change from a Kaby Lake-H series processor. What will change is anything that needs graphics – moving up from the standard HD graphics to something discrete class offers performance for gaming and for OpenCL accelerated applications.
If you have heard that line before, it is because this is the exact way that AMD has promoted its combined CPU/GPU offerings in recent generations: get the benefits of high-performance graphics at a lower cost. Ultimately the biggest issues with the AMD devices based on Carrizo and prior (we haven’t tested Ryzen Mobile yet) were OEM choices to limit memory bandwidth, which crippled gaming, the poor devices on offer, and the efficiency of the first few generations of parts.
What Intel has produced here is something that sits above AMD’s APUs but below a full discrete graphics solution, and arguably the target market is the mobile devices currently running Intel CPUs with NVIDIA’s mid-range GPUs, such as the MX150 or GTX 950M/1050/1060. Intel (or Intel’s customers) clearly believe that this is a market worth going after – enough of a market to buy semi-custom graphics silicon to do so. I bet NVIDIA is really happy (judging by their enterprise and high-end GPU business, they probably are very happy).
However, in Intel’s briefings, results were given compared to systems such as those listed above. Intel also has a habit of comparing new products to 3-year-old systems, because in Intel’s mind these are the users that need to upgrade. As a result there are not many apples-to-apples comparisons here, with the gains shown coming from a mix of CPU and GPU improvements.
Comparisons with Vega M GL
First up is a comparison between the Core i7-8705G with Vega M GL, the 20 CU graphics part, and a 3-year-old Haswell-based Core i7-4720HQ with an NVIDIA GTX 950M.
Intel with Radeon vs 3-Year-Old System (Data from Intel, not AnandTech)

| | i7-8705G (Vega M GL) | i7-4720HQ + GTX 950M | Improvement |
|---|---|---|---|
| Sysmark 2014 SE | ? | ? | 1.6x |
| 3DMark Time Spy (Graphics Score) | ? | ? | 2.3x |
| 3DMark 11 (Graphics Score) | ? | ? | 2.2x |
| Vermintide 2 (Avg FPS, 1080p High) | 47 FPS | 15 FPS | 3.0x |
| Handbrake 1.0.7 (4K to 1080p H.264) | 7 min | 48 min | 6.7x |
| Adobe Premiere Pro (1 min 4K H.264) | 6.5 min | 9 min | 42% |
The big increases are in graphics, particularly the synthetic tests: the 3DMark graphics scores came in 2.2-2.3x better. It is worth noting that Intel only quoted these metrics in terms of the graphics score, and did not include the physics score (which would have been similar) or the combined score, as neither would have given as big a comparison number. The game being used is one I’ve never heard of being used for comparisons before, but ‘3.0x better FPS’ is listed. I would have expected Intel to come on thick with the games here, as for Windows-based systems gaming could be a big draw. The other numbers are content-creation related, relying on OpenCL acceleration for anywhere from a 42% speedup to a 6.7x faster transcode (it was not specified whether the comparison transcode was done on Haswell’s QuickSync or the NVIDIA GPU).
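For reference, the quoted multipliers can be roughly reproduced from the raw numbers in the table, remembering that higher is better for frame rates and lower is better for render times. A quick sketch (the small deviations from Intel’s quoted factors presumably come down to Intel’s exact runs and rounding):

```python
# Deriving improvement factors from the raw table values. Higher FPS is
# better; lower minutes are better, so the time-based ratio is inverted.

def fps_gain(new_fps: float, old_fps: float) -> float:
    return new_fps / old_fps

def time_gain(new_minutes: float, old_minutes: float) -> float:
    return old_minutes / new_minutes

print(f"Vermintide 2: {fps_gain(47, 15):.1f}x")              # quoted 3.0x
print(f"Handbrake:    {time_gain(7, 48):.1f}x")              # quoted 6.7x
print(f"Premiere Pro: {time_gain(6.5, 9) - 1:.0%} quicker")  # quoted 42%
```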
The next comparison is more recent: a Kaby Lake Refresh-based Core i7-8550U with a GTX 1050, up against the same Core i7-8705G with Vega M GL, the 20 CU graphics part.
Intel with Radeon vs i7-8550U + GTX 1050 (Data from Intel, not AnandTech)

| | i7-8705G (Vega M GL) | i7-8550U + GTX 1050 | Improvement |
|---|---|---|---|
| 3DMark 11 (Graphics Score) | ? | ? | 1.3x |
| Hitman (Avg FPS, 1080p DX12 High) | 46 FPS | 33 FPS | 1.4x |
| Deus Ex: MD (Avg FPS, 1080p DX12 High) | 36 FPS | 27 FPS | 1.3x |
| Total War: Warhammer (1080p DX12 High) | 47 FPS | 42 FPS | 1.1x |
The graphics score on the 3DMark synthetic quoted by Intel is 1.3x that of the system with NVIDIA graphics. As a pure graphics score, this should be a pure GPU to GPU comparison. However, the three games listed are interesting choices: all are somewhat dependent on CPU performance.
What Intel has done here is compare a 15W Core-U class Core i7 with 60W NVIDIA graphics against a new system that has a Core-H class processor and Radeon graphics sharing a 65W package budget between them. So for games like Deus Ex, the CPU that can draw a lot more power is clearly going to have a lot more fun here.
It comes across as a mismatched comparison, especially if we consider the price. Intel has not disclosed the prices for the new Intel with Radeon chips, but given the fact that it requires a semi-custom chip from AMD, a stack of HBM2, and the EMIB connection, it is fair to say that it probably costs substantially more than what an OEM would pay for a 15W Core i7 and an NVIDIA GTX 1050.
The plus side for Intel is that these tests can, arguably, show the benefits of Intel’s Dynamic Tuning technology, allowing the CPU or the GPU to pull in the power it needs.
Comparisons with Vega M GH
For the high-powered testing, Intel again pitted a 3-year-old system, as well as a newer NVIDIA Max-Q system, against the new chips. It is clear from the data provided that Intel wants to promote the GH systems more for gaming.
Intel with Radeon vs 3-Year-Old System (Data from Intel, not AnandTech)

| | i7-8809G (Vega M GH) | i7-4720HQ + GTX 960M | Improvement |
|---|---|---|---|
| Sysmark 2014 SE | ? | ? | 1.6x |
| 3DMark Time Spy (Graphics Score) | ? | ? | 2.4x |
| 3DMark 11 (Graphics Score) | ? | ? | 2.7x |
| Hitman (Avg FPS, 1080p DX12 High) | 62 FPS | 22 FPS | 2.7x |
| Vermintide 2 (Avg FPS, 1080p High) | 64 FPS | 24 FPS | 2.6x |
| Total War: Warhammer* (1080p High) | 70 FPS | 34 FPS | 2.0x |
| Rise of the Tomb Raider (1080p DX12 High) | 62 FPS | 31 FPS | 2.0x |

*Total War: Warhammer was run on DX12 for the 8th Gen system and DX11 for the 7th Gen system.
For these benchmark numbers, Intel is being very selective with what it shows: ideally we would see all of these systems running all of the same tests, however that might not put the systems in the best light. We will have to do our own testing here at AnandTech to get the full picture.
For the more recent comparison, Intel put the GH up against a Core i7-7700HQ with a GTX 1060 Max-Q graphics solution – the combination announced last year, designed to provide gaming systems with high-end graphics in nicer-looking chassis with tempered power draw. This is probably the best comparison for competing designs in the market.
Intel with Radeon vs i7-7700HQ + GTX 1060 Max-Q (Data from Intel, not AnandTech)

| | i7-8809G (Vega M GH) | i7-7700HQ + GTX 1060 Max-Q | Improvement |
|---|---|---|---|
| 3DMark 11 (Graphics Score) | ? | ? | 1.07x |
| Hitman (Avg FPS, 1080p DX12 High) | 62 FPS | 57 FPS | 1.07x |
| Deus Ex: MD (Avg FPS, 1080p DX12 High) | 49 FPS | 43 FPS | 1.13x |
| Total War: Warhammer (1080p DX12 High) | 70 FPS | 64 FPS | 1.09x |
Not much to say here, except it would be interesting to see one of the new chips go up against a Max-Q design in the same chassis.
8th Gen Gets More Complex: Confirmed Kaby Lake
The title of this page is a nod to how Intel has thrown away the naming scheme that has driven its core product base for the last few years, confusing everyone (including high-profile partners). The previous naming scheme was for the most part unambiguous – each processor ‘generation’ was one specific Core family or Core microarchitecture design. For an enthusiast, the 6th Generation Core family was Skylake, and the 4th Generation Core family was Haswell. Not anymore.
When it was announced back at Intel’s Manufacturing Day that Intel was going to be fluid on product line architecture and naming, it would appear that we (the technology press, the enthusiast community) severely underestimated how fluid it would be. This is how history will currently see the 8th Generation:
Intel's Core Architecture Cadence (as of 1/7)

| Core Generation | Microarchitecture | Process Node | Release Year |
|---|---|---|---|
| 2nd | Sandy Bridge | 32nm | 2011 |
| 3rd | Ivy Bridge | 22nm | 2012 |
| 4th | Haswell | 22nm | 2013 |
| 5th | Broadwell | 14nm | 2014 |
| 6th | Skylake | 14nm | 2015 |
| 7th | Kaby Lake | 14nm+ | 2016 |
| 8th | Kaby Lake-R | 14nm+ | 2017 |
| 8th | Coffee Lake-S | 14nm++ | 2017 |
| 8th | Kaby Lake-G | 14nm+ | 2018 |
| 8th | Cannon Lake-U | 10nm | 2018? |
| 9th | Ice Lake ... | 10nm+ | 2018? |
| Unknown | Cascade Lake (Server) | ? | ? |
So far, Intel has launched three specific Core microarchitecture designs as ‘8th Generation’ products, and a fourth has been announced. At the high end, we have the desktop-class Coffee Lake processors, using Intel’s latest 14++ process and running up to six cores. For mobile, Intel has launched the 15W Kaby Lake Refresh processors, pushing quad-core Kaby Lake parts into slots where dual-core 7th Generation Kaby Lake hardware used to go. Then there is this new product, Kaby Lake-G, which is not explicitly a refresh, as it uses the same 7th Generation H-series cores as before. The fourth piece of the puzzle is Intel’s first crack at 10nm with Cannon Lake, which at CES 2017 was promised to be shipping by the end of that year, but has unfortunately missed the target.
Extrapolating this terminology, we can look forward (!) to similar naming in future generations. During 2018 we expect Intel to fill out the Coffee Lake processor line, perhaps even bringing it into markets where current 8th Generation parts already exist, or even where 7th Generation parts sit. Unfortunately, looking at the processor name and number will no longer be an indication of the microarchitecture underneath.
Intel’s response to this, to be clear, is that the 8th Generation product portfolio represents the best of what Intel has to offer in each of the respective product segments – Intel’s best will have the highest number, essentially. While this is probably not a bad position to take, it can leave customers in an odd situation: a customer with a good last-generation product who wants to ‘downgrade’ to a mid-range latest-generation product could end up paying to get the same hardware in return.
Final Words
Does This Make Intel and AMD the Best of Frenemies?
It does seem odd, Intel and AMD working together in a supplier-buyer scenario. Previous interactions between the two have been adversarial at best, or have required calls to lawyers at worst. So who benefits from this relationship?
AMD: A GPU That Someone Else Sells? Sure, How Many Do You Need?
Some users might point to AMD’s financials as a reason for this arrangement: had Zen not taken off, this would have been a separate source of income for AMD. Ultimately AMD is looking healthier since Ryzen, and even if Intel did rock up with piles of money, it is unclear how much volume Intel would be requesting for a product with this scope.
Or some might state that this sort of product, if positioned correctly, would encroach on some of AMD’s markets, such as laptop APUs or laptop GPUs. My response is that it actually ends up a win for AMD: Intel is currently aiming at 65W/100W mobile devices, which is a way away from the Ryzen Mobile parts that should come into force during 2018. Every chip AMD sells to Intel is a sale for AMD, and it puts discrete-class Radeon graphics in a system that might have had an NVIDIA product in it instead. NVIDIA’s laptop GPU program is extensive: with Intel at the helm driving the finished product rather than AMD, there is scope for AMD-based graphics to appear in many more devices than if AMD went it alone. People trust Intel on this, and have done for years: if it is marketed as an Intel product, it’s a win for AMD.
Intel: What Does Intel Get Out Of This?
Intel’s internal graphics, known externally as ‘Gen’ graphics, has long been third best behind NVIDIA and AMD for raw grunt. It had trouble competing against ARM’s Mali in the low-power space, and the design has not seemed to scale up to large, 250W desktop GPUs. If you have been following certain analysts that keep tabs on Intel’s graphics, you might have read about the potential woes and missed targets behind closed doors every time there has been a node shrink. Even though Intel has competed with GT3/GT4 graphics with eDRAM in the past (the Crystalwell products), some of which performed well, they came at additional expense for the OEMs that used them.
So rather than scale Gen graphics out to something bigger, Intel worked with AMD to purchase Radeon Vega. It is unclear if Intel approached NVIDIA to try something similar, as NVIDIA is the bigger company, but AMD has a history of being required by one of Intel’s big OEM partners: Apple. AMD has also had a long term semi-custom silicon strategy in place, while NVIDIA does not advertise that part of their business as such.
What Intel gets is essentially a better version of its old Crystalwell products, albeit at a higher power consumption. The end product, Intel with Radeon RX Vega M Graphics, aims to compete with other solutions (namely Intel + NVIDIA MX150/GTX 1050) but with reduced board space, allowing for thinner and lighter designs, or designs with more battery. A cynic might suggest that either way it was always going to be an Intel sale, so why bother going to the effort? One of the threads of Intel’s notebook strategy in recent years has been trying to convince users to upgrade more frequently: over the last couple of years, users who bought 2-in-1s were found to refresh their units quicker than users of clamshell devices. Intel is trying to do the same thing here with a slightly higher class of product. Whether the investment to create such a product is worth it will bear out in sales numbers.
It's Not Completely Straightforward
One thing is clear though: Intel’s spokespersons who gave us our briefing were trained very specifically to avoid mentioning AMD by name in relation to this product line. Every time I expected them to say ‘AMD graphics’ in our pre-briefing, they said ‘Radeon’. As far as the official line goes, the graphics chip was purchased from ‘Radeon’, not from AMD. I can certainly understand trying to stay on brand message, and avoiding the name from an x86 competitive standpoint, but this product fires a shot across the bow of NVIDIA, not AMD. Call a spade a spade.
Aside from the three devices coming with the new processors, from HP, Dell, and the Intel NUC line, one interesting side story came out of this: Intel has already had interest in these new processors from a cloud gaming company. In the same way that a massive GPU-based datacenter can offer many users cloud gaming services, these new chips are set to be in the datacenter for 1080p gaming at super-high density, perhaps more so than current GPU solutions. An interesting thought.
Intel NUC Enthusiast 8: The Hades Canyon Platform
The HP and Dell units are set to be announced later this week during CES. For information about the Intel NUC, using the overclockable Core i7-8809G processor, Ganesh has the details in a separate news post.