28 Comments
CheapSushi - Thursday, November 22, 2018 - link
Maybe I missed it, but any chance of it working with non-QNAP systems?
notb - Thursday, November 22, 2018 - link
There's surely a chance, but it's not clear whether QNAP allows this.

Generally speaking, it's 2 mini PCs on PCIe - both running QNAP OS. Both have separate RAM, small storage and a network interface.
This add-on card doesn't make your NAS faster. The host system sees these 2 tiny PCs as separate servers on the network.
QNAP OS is a Linux, so it's just a question of replacing it with another distro.
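For illustration, a minimal sketch of what "separate servers on the network" means in practice: from the host NAS (or any machine on the LAN), each compute node is just another reachable IP address. The node addresses and the SSH port check below are assumptions for the sake of the example, not QNAP specifics.

    import socket

    # Hypothetical addresses for the two compute nodes; on a real setup each
    # node would get its own IP via DHCP or the management UI.
    NODE_IPS = ["192.168.1.51", "192.168.1.52"]
    PORT = 22  # assuming SSH is enabled on the nodes' Linux-based OS

    def is_reachable(ip, port, timeout=2.0):
        """Return True if a TCP connection to ip:port succeeds within the timeout."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    for ip in NODE_IPS:
        state = "reachable" if is_reachable(ip, PORT) else "unreachable"
        print(f"node {ip}: {state}")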
stanleyipkiss - Thursday, November 22, 2018 - link
The product page on the QNAP website says it works on regular PCs, not just on QNAP NAS systems. I would even buy one, but it has no USB or video output. So I can't use it as a 2nd system in my PC.
jab701 - Thursday, November 22, 2018 - link
"Each node has its own 10 GbE NIC and runs its own copy of QNAP’s mQTS operating system"It is as if each node is a full NAS one a PCI-E card. The PCI-E connection behaves as a 10GB NIC in this case, I guess it avoids having complicated PCI-E bridges between two PCI-E root hubs.
However, you can (with mQTS) run virtual machines (I believe) on these cards, so it would be like having your own cluster within your PC.
notb - Thursday, November 22, 2018 - link
Exactly. This card is basically 2 autonomous PCs. PCIe is there just to provide physical interfaces. There's actually a chance this could work without being plugged into a PCIe slot (it has a separate reset button).

The only problem is: you'll never know (apart from the moving fans), because there's no other way to connect to this thing.
Klimax - Thursday, November 22, 2018 - link
Reminds me of old CPU upgrade cards.
shabby - Thursday, November 22, 2018 - link
Which ones? I remember those OverDrive CPUs but never add-in cards.
piroroadkill - Friday, November 23, 2018 - link
Maybe he's thinking of Slot 1 and Slot A CPUs, or cache on a stick.
DeathReborn - Friday, November 23, 2018 - link
Back in the olden days, some motherboards had add-on boards to upgrade the CPU, like this one: http://www.asrock.com/mb/spec/upgrade.asp?Model=am...
Kevin G - Sunday, November 25, 2018 - link
The Pentium II, early Pentium IIIs, early Xeons, and early Athlons all came on cards. The reasoning was cost, as they all had a separate back-side cache bus that would otherwise have added layers to the motherboard. This also gave Intel/AMD the opportunity to mix and match cache sizes easily.

Apple was a big pusher of processor cards, with their efforts predating the mainstream adoption on the PC side. They were also one of the few to leave them behind, arguably in 2013.
In the server space, back in the day, many RISC systems came on cards. A handful of high-end systems today still do so to increase RAS, reduce downtime, etc.
DanNeely - Monday, November 26, 2018 - link
If you go even farther back, there were add-in cards that could give you a whole second computer of a different type. One of my early-'90s teachers had an x86 card of some sort that let his Apple computer also be an IBM-compatible PC. (It's far enough back that I don't recall any details, like whether it was an Apple II or a Mac, or what x86 CPU was on the card.)
lazarpandar - Thursday, November 22, 2018 - link
That’s wild. I can’t imagine needing extra CPUs... maybe if it was something with more distributed math chops...
Valantar - Thursday, November 22, 2018 - link
Given that these have _four_ 10GbE chips on board plus a PLX chip to allow both to communicate with the host (not to mention two CPUs with tray prices of $107 to $400), I'm guessing this is in the "if you have to ask, you can't afford it" price bracket.

Still a really cool implementation of hardware, though. Essentially two NUCs with 10GbE but no physical I/O, on a PCIe card.
Hereiam2005 - Thursday, November 22, 2018 - link
It's available right now for 4k a pop.
https://www.mouser.com/ProductDetail/IEI/Mustang-2...
Not that terrible for the hardware, really.
Valantar - Friday, November 23, 2018 - link
Hm, yeah, that's not outrageous. Still twice what I've ever spent on a PC, but I was halfway expecting more than that.
Supercell99 - Thursday, November 22, 2018 - link
I don't get this at all. From a reliability standpoint, it seems it adds a whole layer of something else to fail. From a performance standpoint, your disk subsystem is usually the bottleneck, not the CPU.
notb - Thursday, November 22, 2018 - link
It doesn't make your NAS run faster. It's 2 more servers. You plug this into a NAS and you have 3 separate systems (with 3 separate IPs). They only share the storage.

Since QNAP supports VMs and containers, it's not that difficult to imagine a use case. :-)
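A hedged sketch of the "separate compute, shared storage" idea: a job running on one of the card's nodes works directly against a share exported by the host NAS. The NFS export path, host address, and mount point below are illustrative assumptions, not QNAP defaults.

    import subprocess
    from pathlib import Path

    NAS_EXPORT = "192.168.1.10:/share/Public"   # hypothetical NFS export on the host NAS
    MOUNT_POINT = Path("/mnt/nas")              # local mount point on the compute node

    def mount_share():
        """Mount the NAS export on this node (requires root and NFS client tools)."""
        MOUNT_POINT.mkdir(parents=True, exist_ok=True)
        subprocess.run(["mount", "-t", "nfs", NAS_EXPORT, str(MOUNT_POINT)], check=True)

    def count_media_files():
        """Example workload: count video files sitting on the shared storage."""
        return sum(1 for p in MOUNT_POINT.rglob("*") if p.suffix.lower() in {".mkv", ".mp4"})

    if __name__ == "__main__":
        mount_share()
        print(f"{count_media_files()} video files visible on the shared storage")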
PixyMisa - Thursday, November 22, 2018 - link
I wonder why the 7567U though. The 8650U is faster and has a lower TDP (15W vs 28W). Unless they want the faster iGPU for media transcoding.Spunjji - Friday, November 23, 2018 - link
Pretty sure you covered half of it (transcoding is a big draw for a number of uses their boxes can be put to), with the other half being that they're limited by the 75W power delivery to a single board. The 7567U tends to stay at or below rated TDP even in turbo states, whereas the 8650U is 15W in name only and needs between 32W and 45W to properly stretch its legs.
Valantar - Friday, November 23, 2018 - link
The board has a 6-pin PCIe power connector, so they're not limited to 75W, but I'm guessing the choice is partly having a use for the iGPU, and partly getting a deal from Intel on previous-gen parts (they certainly aren't giving out deals on current-gen parts!). CPU compute is likely not what you'd use a board like this for anyway (if that was the case, why not use a single 8th-gen 45W 6c12t CPU?), so increasing the iGPU power sounds smart.
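As a rough sanity check of that power-budget argument: the slot and 6-pin limits below are the standard PCIe figures, while the per-component draws are assumptions rather than measured values.

    # Rough power-budget sketch; component figures are assumed, not measured.
    SLOT_LIMIT_W = 75          # max draw through a x16 slot
    SIX_PIN_LIMIT_W = 75       # max draw through a 6-pin auxiliary connector
    TOTAL_BUDGET_W = SLOT_LIMIT_W + SIX_PIN_LIMIT_W

    CPU_TDP_W = 28             # i7-7567U rated TDP, per node
    OTHER_PER_NODE_W = 15      # assumed: RAM, boot storage, 10GbE controllers, VRM losses
    PLX_AND_MISC_W = 10        # assumed: PCIe switch, fans

    board_draw = 2 * (CPU_TDP_W + OTHER_PER_NODE_W) + PLX_AND_MISC_W
    print(f"Budget: {TOTAL_BUDGET_W} W, estimated draw: {board_draw} W, "
          f"headroom: {TOTAL_BUDGET_W - board_draw} W")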
Kevin G - Sunday, November 25, 2018 - link
It could just be that this took a while to develop. The time frame from 7th to 8th to 9th generation was rather quick.

If these sell, I suspect we'll see a 2019 revision a year from now with those specs. I'd fathom that there would also be a 64 GB-of-memory-per-node option.
Valantar - Monday, November 26, 2018 - link
You're likely right (though it's been a while since these CPUs launched, in Q1 '17; the 8th-gen ones should be pin-compatible drop-in replacements with a BIOS update). Also, the 64GB-per-node version will arrive as soon as 32GB SODIMMs start arriving for real; there's no reason these CPUs shouldn't support 64GB, no matter what the spec sheets say.
paperfist - Friday, November 23, 2018 - link
I can build my own NAS box with this?
Valantar - Friday, November 23, 2018 - link
No. It's an add-on card to add compute power to existing NAS units. It has no I/O other than the PCIe port.
Morawka - Friday, November 23, 2018 - link
It's going to be very, very expensive.
milkod2001 - Monday, November 26, 2018 - link
I'd rather get a NAS with better CPUs to start with, or a home server. This seems like a solution to an almost nonexistent problem.
...So you can't imagine anyone having already bought a several-thousand-dollar NAS and then discovering (after a while) that it'd be useful to also have it run some additional workloads without replacing the entire unit? That's very practically minded of you.

Also: home server? This is a $3700 add-in card. It is not for home use.
liamlewis - Monday, November 26, 2018 - link
A special-purpose accelerator based on two Intel CPUs!