
  • yeeeeman - Thursday, January 9, 2020 - link

    "One of the big issues with the older PCIe 3.0 version was the support of the card on different systems. The card worked well on AMD systems, but had issue with Intel systems, because Intel’s PCIe solution did not support multiple endpoints in the same way. With this new solution, that problem ultimately disappears, because Intel has no PCIe 4.0 solution right now" - do I sense a bit of irony here?
  • MenhirMike - Thursday, January 9, 2020 - link

    Also curious what "support multiple endpoints in the same way" actually means - isn't bifurcation just splitting a PCIe slot? What are the actual technical differences between AMD and Intel here?
  • Valantar - Thursday, January 9, 2020 - link

    ... that Intel isn't enabling bifurcation in most cases (at least beyond splitting an x16 into 2x x8 for GPUs on select platforms).
  • boeush - Thursday, January 9, 2020 - link

    Not to worry - the problem's disappearance is only temporary, and will be resolved the moment Intel comes out with its own PCIe4 chipsets.
  • RogerAndOut - Friday, January 10, 2020 - link

    Don't they first have to manage to come out with some PCIe4 CPUs :)
  • Santoval - Friday, January 10, 2020 - link

    No one knows how "temporary" it's going to be, though. Is Intel going to introduce PCIe 4.0 on the desktop with Rocket Lake? If so, will Rocket Lake be released in late 2020 or in 2021? Its Ice Lake laptops do not support PCIe 4.0, and even if they did it would be wasted in a laptop.

    The same applies to their Tiger Lake based laptops, which were supposed to be released in 2H 2020; due to Intel's delays in deploying Ice Lake for laptops, Q1 2021 looks more likely. Ice Lake for desktop (and probably HEDT as well) appears to have been canned, and the same is going to happen with Tiger Lake, because *all* of Intel's 10nm node variants have crappy yields at high clocks and/or large dies. Intel has not yet fixed their shit, and I don't think they will until they deploy 7nm in high volume.
  • eek2121 - Friday, January 10, 2020 - link

    The burn is real.
  • Santoval - Friday, January 10, 2020 - link

    While it's subtle, I also sense it.
  • SaturnusDK - Thursday, January 9, 2020 - link

    I like the observation that the compatibility problems on Intel platforms are now solved simply because Intel doesn't actually have a platform advanced enough to fully utilize the card.
  • ingwe - Thursday, January 9, 2020 - link

    Yup, glad I read to the bottom for that :D
  • Jorgp2 - Friday, January 10, 2020 - link

    Except it's completely irrelevant, as this is a passive device.

    It will work on any system that supports bifurcation, be it PCIe 1.0 or 4.0.
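
    A quick back-of-the-envelope sketch of that point (plain Python; per-lane rates and encodings are from the PCIe base specs), showing what each drive gets from a x16 slot bifurcated x4/x4/x4/x4 at each link generation:

    ```python
    # Usable per-lane PCIe bandwidth by generation: transfer rate (GT/s)
    # times line-encoding efficiency, divided by 8 bits per byte.
    GENS = {
        1: (2.5, 8 / 10),      # 2.5 GT/s, 8b/10b encoding
        2: (5.0, 8 / 10),      # 5.0 GT/s, 8b/10b encoding
        3: (8.0, 128 / 130),   # 8.0 GT/s, 128b/130b encoding
        4: (16.0, 128 / 130),  # 16.0 GT/s, 128b/130b encoding
    }

    def lane_gbps(gen: int) -> float:
        """Usable bandwidth of a single lane in GB/s."""
        rate, encoding = GENS[gen]
        return rate * encoding / 8

    # The passive card just hands four lanes to each M.2 drive; only the
    # negotiated link generation changes the per-drive ceiling.
    for gen in GENS:
        print(f"PCIe {gen}.0 x4 per drive: {4 * lane_gbps(gen):.2f} GB/s")
    ```

    The card itself is generation-agnostic; only the per-drive ceiling moves, from 1.00 GB/s at PCIe 1.0 up to ~7.88 GB/s at PCIe 4.0.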
  • GreenReaper - Thursday, January 9, 2020 - link

    "Creator community" is already my most-hated phrase of 2020. >_<
  • eek2121 - Friday, January 10, 2020 - link

    Artificial market segmentation at its finest.
  • Dug - Thursday, January 9, 2020 - link

    Great cheap solution without the slowdown you get from hanging multiple NVMe drives off the chipset.
  • Kevin G - Friday, January 10, 2020 - link

    I was kinda hoping that this would have a MicroSemi PCIe 4.0 bridge chip to solve the bifurcation issues. Right now PCIe 4.0 M.2 cards can't saturate their 4-lane links, so stuffing this into an 8-lane PCIe 4.0 slot or a 16-lane PCIe 3.0 slot wouldn't be that detrimental currently.

    The other bonus of a bridge chip is that some permit things like RAID1, where the mirroring is done on the bridge, so performance is maintained while not sacrificing PCIe lanes from the host. Nice for LGA 115X systems with so few lanes to begin with.
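
    To put rough numbers on that (a minimal sketch; per-lane figures from the PCIe specs, and the bridge itself is hypothetical, as above):

    ```python
    # An x8 PCIe 4.0 uplink and an x16 PCIe 3.0 uplink carry the same
    # usable bandwidth, so either slot feeds a bridged quad-M.2 card equally.
    GEN3_LANE = 8.0 * (128 / 130) / 8    # ~0.985 GB/s per Gen3 lane
    GEN4_LANE = 16.0 * (128 / 130) / 8   # ~1.969 GB/s per Gen4 lane

    print(f"x8  PCIe 4.0 uplink:  {8 * GEN4_LANE:.2f} GB/s")
    print(f"x16 PCIe 3.0 uplink:  {16 * GEN3_LANE:.2f} GB/s")
    # Theoretical combined ceiling of four x4 PCIe 4.0 drives:
    print(f"4 drives at x4 Gen4:  {16 * GEN4_LANE:.2f} GB/s")
    ```

    Both uplinks come to ~15.75 GB/s, half the four drives' combined x4 ceiling, which only hurts if all four drives could actually sustain that ceiling at once.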
  • MenhirMike - Friday, January 10, 2020 - link

    Problem with a bridge is that it drives up the cost significantly. The existing PCIe 3.0 version costs about $50, compared to ~$200 for ones with a bridge chip (which will surely be made by someone at some point).
  • Billy Tallis - Friday, January 10, 2020 - link

    What you're referring to is usually called a PCIe switch. Bridge chips are generally doing some kind of conversion between protocols, so that they can connect two devices that would otherwise not be able to communicate.

    As far as I am aware, Marvell is the only one with an NVMe switch that operates at the level of the NVMe protocol rather than just PCIe. That's how it can do RAID 0/1, virtualization, and other SSD-specific stuff, but it hasn't been updated for PCIe 4.0 yet. Broadcom/PLX and Microchip/Microsemi PCIe switches cannot do RAID for NVMe devices.

    All of those switch products are priced to be accessible only to enterprise customers. ASMedia has some relatively small PCIe 3.0 switches (up to 8 lanes upstream, 16 lanes downstream), and even that makes a quad-M.2 card a ~$250 product.
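
    As a rough illustration of what such a switch trades away (a sketch using the ASMedia configuration mentioned above: x8 upstream, x16 downstream across four x4 drives):

    ```python
    # Oversubscription of a PCIe 3.0 switch: x8 uplink to the host,
    # sixteen downstream lanes split across four x4 M.2 drives.
    GEN3_LANE = 8.0 * (128 / 130) / 8    # ~0.985 GB/s usable per lane

    upstream = 8 * GEN3_LANE             # host-facing link
    downstream = 4 * 4 * GEN3_LANE       # four drives at x4 each

    print(f"Upstream link:    {upstream:.2f} GB/s")
    print(f"Downstream total: {downstream:.2f} GB/s")
    print(f"Oversubscription: {downstream / upstream:.0f}:1")
    ```

    Unlike the passive card, the switch needs no bifurcation support from the host, and a single drive can still burst at its full x4 rate; the 2:1 oversubscription only bites when all four drives are busy at once.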
  • eek2121 - Friday, January 10, 2020 - link

    I haven't looked at individual drives, but many are claiming numbers that are exactly double the previous gen. What do you mean they aren't saturating the link?

    I also disagree with others who have stated that we don't need faster storage. Storage remains a huge bottleneck, even today. PCIe could also benefit from lower latency.
