31 Comments


  • rpg1966 - Thursday, November 12, 2020 - link

    Extraordinary price for such a simple board. Yes yes, I know.
  • Billy Tallis - Thursday, November 12, 2020 - link

    Simple board, crazy expensive switch chip, and bundled software that really ought to be standard OS functionality.
  • lightningz71 - Friday, November 13, 2020 - link

    Excluding boot capabilities, I know for a fact that Windows 10 and Linux have built-in mechanisms for building RAID arrays of many different types. MS Storage Spaces, whose full features are accessible from PowerShell in Windows 10, can do many things very well. Linux has ZFS and several other toolsets that do the same things.
  • Gigaplex - Saturday, November 14, 2020 - link

    The Windows RAID support is junk and not worth using in mission-critical systems. And the performance of Storage Spaces is terrible; there's no way you'd put hardware of this calibre under it.
  • pinoyians - Monday, November 16, 2020 - link

    Totally agree. Nuts to be implementing it in mission-critical settings.
  • charlesg - Monday, November 16, 2020 - link

    Agreed on Storage Spaces. I used it for ~40TB of data (parity configuration) and it worked pretty well for a few years. Then the May "spring update" caused corruption in parity storage spaces. Their solution? Make it read-only. I think it was finally fixed in August, which is WAY too long for something like that.
  • Tomatotech - Friday, November 13, 2020 - link

    At around $100+/TB, 32TB is $3,200, and 64TB is $6,400.

    The price of the card is around 15-20% of that, which is affordable. In that kind of system, running at 20-28GB/s, the card has to be ultra-reliable, with fast support for any bugs. That's what costs.

    This might be the first time I've seen a single consumer card (including GPUs!) coming close to maxing out the PCIe 4.0 interface. Just in time for PCIe 5.0 in 2021 or 2022.
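
    A quick back-of-the-envelope check of that split - a minimal sketch, assuming a placeholder card price rather than an actual figure:

        # Card's share of total storage cost. CARD_PRICE is a
        # hypothetical placeholder, not the card's actual price.
        COST_PER_TB = 100   # ~$100+/TB for high-end NVMe drives
        CARD_PRICE = 600    # placeholder, USD

        for tb in (32, 64):
            drives = tb * COST_PER_TB
            share = CARD_PRICE / drives
            print(f"{tb} TB: drives ${drives:,}, card is ~{share:.0%} of drive cost")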
  • rpg1966 - Friday, November 13, 2020 - link

    Yes, but apart from the switch chip, the board must cost pennies to make. I understand how pricing works, but it still seems weird.
  • Spunjji - Monday, November 16, 2020 - link

    Only if you exclude the costs of design, prototyping, readying for production, writing software, testing and validation, packaging, and marketing.

    They probably won't be selling hundreds of thousands of these, so those fixed costs are going to be a bigger proportion of each card's price than for, say, a mid-range GPU.
  • alpha754293 - Thursday, November 12, 2020 - link

    So...it's a race to see how fast you can burn through that write endurance limit of the M.2 NVMe SSDs?

    (U.2 is a bit better, but it's also at least DOUBLE the cost, if not more. Even with a U.2 NVMe SSD that has a write endurance of 10 DWPD, you can still burn through that fairly quickly if you're constantly working with multi-GB video files.)

    (Depending on the resolution, colour depth, frame rate, and duration, you can get a pretty good estimate of how many video streams the drive can handle within its write endurance limit; otherwise, you're looking at "premature" drive failures.)
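
    A minimal sketch of that estimate - every input here is a hypothetical example, not a spec from the article:

        # Rough stream-hours a drive's write endurance can absorb.
        # All inputs are hypothetical examples.
        DRIVE_TB = 8          # drive capacity in TB
        DWPD = 1.0            # rated drive writes per day (U.2 parts may be ~10)
        WARRANTY_YEARS = 5    # endurance is rated over the warranty window

        endurance_tb = DRIVE_TB * DWPD * 365 * WARRANTY_YEARS

        # Uncompressed video write rate: width * height * bytes/px * fps
        w, h, bpp, fps = 3840, 2160, 3, 30       # 8-bit RGB 4K at 30 fps
        tb_per_hour = w * h * bpp * fps * 3600 / 1e12

        print(f"Rated endurance: {endurance_tb:,.0f} TB written")
        print(f"One stream writes ~{tb_per_hour:.2f} TB/hour")
        print(f"About {endurance_tb / tb_per_hour:,.0f} stream-hours before the rating runs out")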
  • Unashamed_unoriginal_username_x86 - Friday, November 13, 2020 - link

    You're still writing at the same speed per drive (though with half as much useful data on mirrored arrays), so your drives will be utilized the same amount.
  • croc - Friday, November 13, 2020 - link

    The issue that I have with HighPoint is that their cards act more as aggregators than true controllers. They offer no RAID level that the CPU does not do natively. So what exactly are they controlling? And, given that they have to be aimed at the AMD market (whose software RAID includes MORE levels...), why would anyone buy them when ASUS and Gigabyte offer an aggregator for free? Maybe if they offered hardware RAID at levels 5 and 6... But only having levels 0, 1 and 10 is pretty limiting, especially since you would get the same RAID levels just using AMD's software, with direct-to-CPU levels of performance.
  • Billy Tallis - Friday, November 13, 2020 - link

    Your complaint applies to almost all NVMe RAID solutions. There simply aren't hardware RAID controllers for NVMe that are comparable to what we're accustomed to seeing for SATA and SAS. What ASUS, Gigabyte and similar offer are passive riser cards that rely on the host motherboard to support bifurcation, and on the CPU/chipset vendor to provide a software RAID implementation that may or may not be any faster or more flexible than HighPoint's. HighPoint's NVMe RAID cards get rid of both of those requirements, but in practice those two issues aren't often a deal-breaker in this price range. The 8-drive HighPoint cards have a clearer advantage.
  • Hul8 - Sunday, November 15, 2020 - link

    Since this card uses a PLX chip, it should be possible to drive it with a PCIe x8 link and still retain use of all its M.2 slots. That's not possible with the passive cards - with those, if you only have x8 lanes from the motherboard, even with bifurcation you get only 2 usable M.2 slots.
  • Hul8 - Sunday, November 15, 2020 - link

    Never mind that you can't bifurcate x16 into eight x4 links without a PLX chip.
  • saratoga4 - Saturday, November 14, 2020 - link

    >The issue that I have with HighPoint is that their cards act more as aggregators than true controllers.

    Hardware RAID died with SATA.
  • Duncan Macdonald - Friday, November 13, 2020 - link

    If you have enough PCIe lanes (e.g. on a Threadripper or EPYC platform), then the MUCH cheaper ASUS Hyper M.2 card is a better option. (No expensive PCIe switch - just a fan to keep everything cool - available from Amazon UK for £65.30.) This is a passive card that directly connects up to 4 NVMe SSDs to a PCIe x16 socket; the CPU and motherboard must support splitting the x16 into 4 x4 connections.
  • bernstein - Friday, November 13, 2020 - link

    was going to make the exact same point. the irony is that these are the only platforms the target market for these cards buys... so this is aimed at the niche of Xeon/EPYC/Threadripper buyers that still don’t have enough PCIe lanes... which in the EPYC case is basically no one ;-)
  • TootsieNootan - Saturday, November 14, 2020 - link

    The Threadripper 3960X and higher have 72 PCIe lanes - plenty for RAID controllers. Even the Threadripper 2 series did fine on PCIe lanes. I had an older SSD7102 from HighPoint in my old machine and it did fine as well. I looked at the Core i9 X-series chips, which have more lanes than mainstream parts but not nearly as many as Threadripper. The Xeon and EPYC CPUs have a lot of lanes, but I needed a workstation and not a server.
  • TootsieNootan - Saturday, November 14, 2020 - link

    One of the big factors to consider when running PCIe 4.0 M.2 drives is excessive heat. I looked at the ASUS card but was worried it had inadequate cooling - a fan and no heatsinks. The HighPoint is one giant heatsink with a fan. I went with HighPoint because I didn't want to risk expensive M.2 drives overheating and frying.
  • msroadkill612 - Sunday, November 15, 2020 - link

    There is a big difference between cooling an NVMe drive lying flat in the mobo M.2 slot and drives on a vertical card. I would be confident I could improvise something.
  • msroadkill612 - Sunday, November 15, 2020 - link

    Ah yes, but...

    The ASUS quad NVMe adapter card also works in an x8 slot hosting 2x NVMe.

    Many former TR workstation users are finding they can just squeeze into a far cheaper 12- or 16-core AM4 build - at times they even prefer it.

    With PCIe 4.0 and a mobo that supports bifurcation, an x8 PCIe 4.0 GPU retains the bandwidth of an x16 PCIe 3.0 GPU, yet frees 8 lanes for use by the second x8 slot (see the quick check of that math below).

    This could be pushed to a 3x NVMe RAID by also using the native AM4 M.2 slot and booting from a SATA SSD.

    A triple-NVMe RAID, 64GB and 16 cores is quite a beast for a relative pittance.
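
    A quick check of that lane math - a minimal sketch using the usual approximate per-lane throughput figures:

        # Approximate usable throughput per lane, GB/s (after encoding overhead).
        GBPS_PER_LANE = {3: 0.985, 4: 1.969}

        pcie3_x16 = 16 * GBPS_PER_LANE[3]   # ~15.8 GB/s
        pcie4_x8 = 8 * GBPS_PER_LANE[4]     # ~15.8 GB/s - same bandwidth, half the lanes

        print(f"PCIe 3.0 x16: ~{pcie3_x16:.1f} GB/s")
        print(f"PCIe 4.0 x8:  ~{pcie4_x8:.1f} GB/s")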
  • quorm - Monday, November 16, 2020 - link

    Heck, the ASRock TRX40 Taichi mobo comes with an expansion card that handles four PCIe 4.0 M.2 drives.
  • TootsieNootan - Friday, November 13, 2020 - link

    I have a HighPoint 7505 with 4 Samsung 980 Pros in it. Read is 24K, write is 18.8K. It's my boot drive, so my system and all my apps load super fast when I'm not waiting for the CPU to catch up.
  • TootsieNootan - Friday, November 13, 2020 - link

    Forgot to mention: I did run into one problem with the HighPoint 7505 if you're using it as a boot drive, which I assume will affect all models. VMware complains about EFI drives and says that in order to run a virtual machine I have to give it 96 gigs of RAM.
  • TomWomack - Saturday, November 14, 2020 - link

    Does anyone know whether these can be used as JBOD (insert eight drives, get /dev/nvme0n1 through /dev/nvme0n8)? It feels like there ought to be scope for a Chinese manufacturer to provide much, much cheaper PCIe 4.0 switch chips; yes, the interface electronics are hard to design, but they're exactly the sort of thing that the Chinese government seems willing to subsidise in order to get indigenous provision.
  • Billy Tallis - Monday, November 16, 2020 - link

    Yes, you can use these in JBOD mode. It's software RAID, so without their RAID drivers you just have a PCIe switch routing lanes to eight NVMe drives. (Though it would be /dev/nvme0n1 through /dev/nvme7n1, since each drive is a different NVMe controller rather than a different namespace on the same controller.)

    ASMedia does make PCIe gen3 switches, but only up to 24 lanes (so x8 up, x4+x4+x4+x4 downstream ports). Microchip/Microsemi and Broadcom/PLX are the only two options for gen4 switches, or large gen3 switches.
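
    A minimal Linux-side sketch of what that layout looks like, assuming the standard /dev/nvme naming:

        # Group NVMe block devices by controller to show why eight drives
        # appear as nvme0n1..nvme7n1 rather than nvme0n1..nvme0n8.
        import glob, re
        from collections import defaultdict

        controllers = defaultdict(list)
        for dev in sorted(glob.glob('/dev/nvme*n*')):
            m = re.fullmatch(r'/dev/nvme(\d+)n(\d+)', dev)
            if m:  # skips partitions such as nvme0n1p1
                controllers[int(m.group(1))].append(dev)

        for ctrl in sorted(controllers):
            print(f"controller nvme{ctrl}: {', '.join(controllers[ctrl])}")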
  • carcakes - Monday, November 16, 2020 - link

    Wonderful! 8X! They ditched StoreMI for something new... haven't seen any new tool since then.
  • carcakes - Monday, November 16, 2020 - link

    Snaps: https://highpoint-tech.com/USA_new/series-ssd7500-... (image: https://highpoint-tech.com/USA_new/images/ima20042...)
  • abufrejoval - Wednesday, November 18, 2020 - link

    The move by Avago (now Broadcom) to grab all the PCIe switch IP and raise prices through the roof has evidently paid off for them.

    I'm not even sure they sell all that many of them in servers, as opposed to storage appliances.

    But since the IOD in Zen is pretty much a PCIe switch yet comes at a far lower price, I wonder if one couldn't simply use X570 chips instead.

    AMD and Intel have the IP, they could both certainly crash the Broadcom party, right?
  • carcakes - Friday, April 30, 2021 - link

    Just installed 8x: ASRock M.2 2280 B+M key VGA module

    https://www.asrockrack.com/general/productdetail.a...
