33 Comments

  • powerarmour - Wednesday, January 20, 2021 - link

    I still prefer my Hades Canyon over these new ones, tbh: they've actually reduced the display outputs, dropped the dual LAN, and a last-gen Turing is a bit odd (instead of their own DG/Xe graphics). Still a lot of power for a small size, though.

    Even the i7-8809G CPU is still slightly faster than an i7-1165G7.
  • powerarmour - Wednesday, January 20, 2021 - link

    I'd also add that if you're a Windows user, you're mostly SOL as regards Polaris 22/Vega M drivers on Hades Canyon (absolutely fine with Linux, though), so there is always that.
  • damianrobertjones - Wednesday, January 20, 2021 - link

    There are instructions online on how to update to the latest AMD drivers.
  • powerarmour - Wednesday, January 20, 2021 - link

    I know, that's why I said 'mostly' ;). This box still runs better on Linux though, IMHO.
  • ava1ar - Wednesday, January 20, 2021 - link

    Same here. My main concerns are the lack of 10GbE (I wanted it on Hades and ended up using an external card via TB3), the 4-core CPU (Intel doesn't have anything suitable with more cores right now), and the redundant NVIDIA GPU.
    The missing 10GbE is completely baffling given the MSRP: Intel produces such cards and could easily fit one without breaking the budget.
    The CPU used is very mainstream and not very impressive for an "enthusiast" machine. Compared to Skull and Hades, it is just a regular part, while before, Intel tried to put its more special CPUs into these machines.
    TB4 doesn't give much day-to-day advantage over TB3 as of today.
    Hades is capable of running 64GB of PC-3200 RAM thanks to XMP profile support (I am running a Crucial Ballistix 3200 32GBx2 kit right now).
    WiFi/BT in Hades is upgradable, and an AX200 works just fine there.

    So, it's very difficult to find a valid reason to prefer the new one over the old (except maybe the driver issues for Windows users). One small benefit is support for 22110 SSDs, which allows installing a 380GB Optane 905P, which someone may find useful.

    I would be much more interested if they made something closer to the original Skull Canyon, in a slimmer case with just Intel Xe graphics and an MSRP closer to $650 (the MSRP of Skull Canyon). Being 2x more expensive, and with a troublesome NVIDIA GPU for Linux users, it's definitely a hard pass from my side.
  • ava1ar - Wednesday, January 20, 2021 - link

    Forgot to mention: Hades officially supports 6 screens versus Phantom's 4. A downgrade here as well.
  • powerarmour - Wednesday, January 20, 2021 - link

    Indeed; as a Linux user myself, I've certainly not had any issues with mine. It runs great with the latest LTS 5.10.1 kernel w/ Mesa 20.x and RADV ACO, and it's still quite performant.
  • ganeshts - Wednesday, January 20, 2021 - link

    The display outputs support MST, so you can always have more than one monitor in a DP chain. In fact, I *think* you can go up to 8 displays on this (4 from the iGPU, 4 from the dGPU).
  • ava1ar - Wednesday, January 20, 2021 - link

    Intel says it is 4: https://ark.intel.com/content/www/us/en/ark/produc...
  • ganeshts - Thursday, January 21, 2021 - link

    I got official clarification from Intel:


    We expect that, when utilizing Type-C to DP MST splitters, customers may see a satisfactory experience up to 6 total 4K@60hz monitors. In this configuration, four of the panels would be driven by the Intel Xe Graphics, and two would be natively connected to the GeForce RTX 2060 through mDP1.4 and HDMI 2.0.

    The Thunderbolt 4 specification and certification requires every Thunderbolt 4 configured port to support the dual display functionality. This means that the front and rear Thunderbolt 4 ports can be split into 2x 4K@60Hz panels each.


    So, pretty much no regression compared to Hades Canyon with respect to the number of displays supported.
  • ava1ar - Thursday, January 21, 2021 - link

    This is great; however, to reach 6 you will need at least two displays connected via MST, while Hades allowed each of them to be connected individually. Not a fundamental difference though, just a small note.
  • e36Jeff - Wednesday, January 20, 2021 - link

    How are they running PCIe Gen 4 to a 2060? Everything I can find says that GPU is PCIe Gen 3. Did NVIDIA custom-make a Gen 4 2060 for Intel?
  • NextGen_Gamer - Wednesday, January 20, 2021 - link

    I was thinking the same thing. AnandTech points out that Intel is using the PCIe 4.0 lanes that are part of the processor itself to connect to the GPU, which is all fine and dandy. But last I checked, the entire NVIDIA "Turing" lineup never featured PCIe 4.0 support on it... so I don't see how those PCIe 4.0 lanes from the CPU are actually running in 4.0 mode.
  • jeremyshaw - Wednesday, January 20, 2021 - link

    The smallest Turing did seem to pick up Gen 4 support: the MX450. Dunno if it's at Gen 4 PHY speeds, but it claims to be supported.
  • Jorgp2 - Wednesday, January 20, 2021 - link

    It's running in Gen 3 mode.

    He just means they used the CPU's Gen 4 lanes to hook up the GPU, instead of connecting them to an M.2 slot.

    Otherwise the GPU would be connected to the chipset.
  • Arsenica - Wednesday, January 20, 2021 - link

    In Tiger Lake, the "chipset" is integrated into the same package as the rest of the chip. Internally it uses DMI 3.0 as the interface, re-branded as OPI, with the equivalent bandwidth of four PCIe Gen 3 lanes.
  • powerarmour - Wednesday, January 20, 2021 - link

    If it's only effectively four lanes of Gen 3.0, that's another bit of a gimp; Hades Canyon had x8 Gen 3.0 dedicated to the GPU.
  • Targon - Wednesday, January 20, 2021 - link

    Probably PCIe 3.0 to the GPU, but then PCIe 4.0 for storage.
  • ava1ar - Wednesday, January 20, 2021 - link

    They wire PCIe 4.0 lanes to the GPU (even though the GPU can't do Gen 4 and falls back to Gen 3), so the storage is Gen 3 for sure (no more Gen 4 lanes left).
  • mukiex - Thursday, January 21, 2021 - link

    I wouldn't be surprised if they have some kind of PCIe 4-to-3 shim chip so they could cut the lane count in half. With the 8 lanes for NVMe and 2 lanes for LAN/SD storage/audio (odd choice there), they've only got 10 left for the graphics card, and an x8 PCIe 4.0 connection would be bandwidth-identical to an x16 PCIe 3.0 connection.
  • mukiex - Thursday, January 21, 2021 - link

    Actually, it's probably just a 4.0 connection to a 3.0 chip, and it's basically doing nothing with half the bandwidth.
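    [Editor's note] The bandwidth arithmetic in the comment above is easy to sanity-check. A minimal sketch, using the per-lane transfer rates and 128b/130b encoding shared by PCIe Gen 3 and Gen 4 (lane counts here are illustrative, not taken from a teardown):

    ```python
    # Per-lane raw rate in GT/s for each PCIe generation.
    GT_PER_S = {3: 8.0, 4: 16.0}

    def link_gbps(gen: int, lanes: int) -> float:
        """Usable one-way bandwidth of a PCIe link in GB/s.

        Gen 3 and Gen 4 both use 128b/130b encoding, so usable
        bandwidth = raw rate * (128/130) / 8 bits-per-byte * lanes.
        """
        return GT_PER_S[gen] * (128 / 130) / 8 * lanes

    # An x8 Gen 4 link carries exactly as much as an x16 Gen 3 link:
    print(round(link_gbps(4, 8), 2), round(link_gbps(3, 16), 2))
    # Likewise, an x4 Gen 4 link matches Hades Canyon's x8 Gen 3 link:
    print(round(link_gbps(4, 4), 2), round(link_gbps(3, 8), 2))
    ```

    Since Gen 4 exactly doubles the per-lane rate at the same encoding, halving the lane count at the next generation is always bandwidth-neutral; the catch, as noted above, is that a Gen 3-only GPU never sees that doubled rate.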
  • dromoxen - Friday, January 22, 2021 - link

    Yeah, surely you can connect a PCIe 3 device to PCIe 4; it's compatible, but runs at PCIe 3 speeds, no questions there.
    I think the AMD-powered NUCs are the better deal: better CPU, lower graphics, and a much lower price. So Intel gets the industrial customers, and AMD (powered) mops up the rest.
    Sticking a 30-series card in this would have pushed the price beyond sanity, if one were even available.
  • bill44 - Wednesday, January 20, 2021 - link

    Who will be out first with an SFF/NUC that sports HDMI 2.1?
    Also, what happened to 4x TB4 support on TGL?
    Still waiting for a NUC that has 10GbE, HDMI 2.1, 4x TB4/USB4, DP 2.0, and WiFi 6E.
    2023, maybe?
  • KimGitz - Wednesday, January 20, 2021 - link

    Exactly.
    In 2023, I hope for 4x "Thunderbolt X" with support for PCIe 6.0. PCIe 6.0 will be a huge step up from the PCIe 3.0 implemented in Thunderbolt 4, with PAM4 signaling and FEC.

    Going with a NVIDIA GeForce RTX 3060 on this would have allowed them to take advantage of Resizable BAR.
  • hubick - Wednesday, January 20, 2021 - link

    I had a Skull Canyon, where the 45W TDP was decent, and then I got two Hades Canyons (home + office), cuz the 100W TDP was awesome for a package that size.

    I have a 4K 40" screen and HP Reverb VR headset, and I was thinking it could be nice to hook those up to an eGPU box with an RTX 3080 or 6800XT which I can use with either my laptop or NUC.

    I was looking forward to these as an upgrade for that, because having Thunderbolt 4 on the CPU is super awesome for driving an eGPU, but the 28W TDP is really disappointing, and I wonder if it's enough oomph to drive the eGPU. I mean, my Razer Book 13 is already 28W, so on paper this isn't nearly as exciting/useful as if it were 100W TDP again like Hades Canyon was.
  • timecop1818 - Wednesday, January 20, 2021 - link

    Hades Canyon was 100W because of the 75W AMD GPU on-package.
  • hubick - Wednesday, January 20, 2021 - link

    The Hades Canyon CPU is configured for a 65W TDP, and you can certainly exercise that if you disable the AMD GPU in favor of an eGPU.

    "This is simply a test of CPU performance. As expected, the Core i7-8809G with its 65W processor TDP slots closer to the Core i7-6700 and the Core i7-7700." - https://www.anandtech.com/show/12572/the-intel-had...

    "The cores manage to consistently stay above the rated clock (3.1 GHz) under all loading conditions. Given the higher power level (65W) that the CPU is configured for, we find that it stays close to 3.9 GHz till the CPU die starts to approach the 100C junction temperature." - https://www.anandtech.com/show/12572/the-intel-had...

    The Hades Canyon thermal solution is rated for 100W TDP in total, though, and assuming Phantom has a similar setup, it would be nice if disabling the RTX dGPU for an eGPU got you more gains than a 28W TDP envelope. I can get that from a 1cm-thick ultrabook; this thing should be able to do better.
  • Spunjji - Thursday, January 21, 2021 - link

    Releasing this right after Nvidia announced the mobile 3060 is a bit disappointing. As the article notes, the pricing puts it into direct contention with gaming notebooks which offer more functionality. I'm not convinced.
  • nils_ - Friday, January 22, 2021 - link

    The block diagram is a bit surprising. I thought Tiger Lake only has 4 PCIe 4.0 lanes total; how do they manage 4 to the GPU as well as 4 to the M.2?
  • ava1ar - Friday, January 22, 2021 - link

    The M.2 is connected to PCIe 3.0; it is clearly mentioned in the table.
  • Eastman - Wednesday, January 27, 2021 - link

    I'm just disappointed they didn't try to implement their HK chips in this. I wouldn't mind a slightly thicker case; an 8-core 10980HK would have been nice. I know they have Ghost/Quartz Canyon with a discrete GPU, but they should've taken that and implemented it in Phantom Canyon.
  • JoeDuarte - Wednesday, January 27, 2021 - link

    Can someone explain the M.2 SSD notations here? I've never understood all these different strings used to refer to M.2.

    For Phantom Canyon, the table says "M.2 22x80/110 (key M)". For Hades Canyon, it says "M.2 22x42/80 (key M)". What do these numbers represent, like 80/110 vs 42/80?

    Thanks.
  • Kelly2 - Monday, April 12, 2021 - link

    > The M.2 standard allows module widths of 12, 16, 22 and 30 mm, and lengths of 16, 26, 30, 38, 42, 60, 80 and 110 mm.

    https://en.wikipedia.org/wiki/M.2#Form_factors_and...
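    [Editor's note] In other words, the "WWxLL" part of the designation is just width by length in millimetres, and the compact four- or five-digit form ("2280") is the same two numbers concatenated. A throwaway illustration (`parse_m2` is a hypothetical helper, not from any spec tooling):

    ```python
    def parse_m2(size: str) -> tuple[int, int]:
        """Split an M.2 'WWxLL' size string into (width_mm, length_mm)."""
        width, length = size.split("x")
        return int(width), int(length)

    print(parse_m2("22x80"))   # (22, 80): 22 mm wide, 80 mm long, i.e. "2280"
    print(parse_m2("22x110"))  # (22, 110): the longer 22110 modules Phantom Canyon accepts
    ```

    So "M.2 22x80/110" means the slot takes 22 mm-wide modules of either 80 mm or 110 mm length, while Hades Canyon's "22x42/80" topped out at 80 mm, which is why the 22110-only Optane 905P fits the new unit but not the old one.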
