20 Comments

  • ksec - Tuesday, April 2, 2019 - link

    From this announcement, it sounds like Intel wasn't really interested in Mellanox and their bid was only meant to push Nvidia's bidding price higher?
  • Kevin G - Tuesday, April 2, 2019 - link

    I wouldn't say that. Without Mellanox, how many other high-speed networking companies are out there that implement fabrics other than Ethernet? It'd effectively be Intel or Ethernet. At this point, Intel would be free to raise hardware prices as they see fit. Having a (near) monopoly can pay off.
  • TeXWiller - Tuesday, April 2, 2019 - link

    >For users involved in the networking space, I know what you are going to say: doesn't Intel already offer OmniPath at 100G?
    They probably wouldn't say that, any more than they would try to build an Ethernet cluster when an InfiniBand network is needed for low enough latency.
  • binarp - Thursday, April 4, 2019 - link

    +1 No one in networking would ask this. InfiniBand/OmniPath networks are high-bandwidth, ultra-low-latency, often non-blocking, memory-interconnect, closed-loop networks. Other than high bandwidth, Ethernet networks are rarely any of these things.
  • abufrejoval - Tuesday, April 2, 2019 - link

    DDP sounds very much like the programmable data plane that Bigfoot Torfinos implement via the P4 programming language. So is the hardware capable enough, and will Intel support P4, or do they want to split (and control) the ecosystem like with OmniPath and Optane?
  • abufrejoval - Wednesday, April 3, 2019 - link

    Barefoot Tofino, sorry, need edit
  • 0ldman79 - Tuesday, April 2, 2019 - link

    To be fair, can an Ivy Bridge even *move* 100Gbps of data over its NIC?

    I'm not sure how else they could do the demo; most hardware would be the bottleneck. The comparison would have to be against older tech, though I suppose they could have used a more recent system for comparison.

    Honestly though, for that level of hardware an Ivy Bridge might be the correct comparison. I doubt these devices get changed very frequently.

    We have used a few Core i7 machines as edge routers for multi-gigabit networking. Not sure what the upper limit is; we haven't hit it yet, even with our Nehalem. I doubt it would handle 10Gbps with a few hundred thousand connections pushing through it, though.
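
    As a rough sketch of why packet rate, not just link speed, is usually the wall for a software router (plain Python; the line rates and frame sizes are just illustrative assumptions):

        # Rough packets-per-second needed to saturate a link at various frame sizes.
        # On the wire, Ethernet adds about 20 bytes per frame of overhead
        # (preamble + start-of-frame delimiter + inter-frame gap).
        LINK_GBPS = [10, 100]          # line rates to compare
        FRAME_BYTES = [64, 512, 1500]  # small / medium / full-size frames

        for gbps in LINK_GBPS:
            bits_per_sec = gbps * 1e9
            for frame in FRAME_BYTES:
                wire_bits = (frame + 20) * 8  # frame plus per-packet overhead
                pps = bits_per_sec / wire_bits
                print(f"{gbps:>3} Gbps @ {frame:>4}B frames = {pps / 1e6:6.2f} Mpps")

    At 1500-byte frames, 10Gbps works out to under a million packets per second; at 64-byte frames it is closer to 15 million, which is where a general-purpose CPU without kernel bypass runs out of steam.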
  • hpglow - Tuesday, April 2, 2019 - link

    No, but Ivy Bridge can't move 50GB/sec over its port either. It does so the same way a 100GB/sec connection would be enabled: through a 16x PCIe slot.
  • trackersoft123 - Wednesday, April 3, 2019 - link

    Hi guys,
    do you see the difference between 100GB/s and 100Gb/s?
    There is roughly an 8x difference :)
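
    For anyone skimming, a quick sketch of that conversion (plain Python; encoding and protocol overhead ignored):

        # Convert a link rate in gigabits per second to gigabytes per second.
        # 1 byte = 8 bits, so the two units differ by a factor of 8.
        def gbps_to_gigabytes_per_sec(gbps: float) -> float:
            return gbps / 8

        for rate in (10, 40, 100):
            print(f"{rate} Gb/s is about {gbps_to_gigabytes_per_sec(rate):.1f} GB/s")
        # 100 Gb/s is ~12.5 GB/s; an actual 100 GB/s pipe would be an 800 Gb/s link.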
  • Nutty667 - Monday, April 8, 2019 - link

    We did 200Gbps on a Broadwell server with UDP packets. You have to use a kernel-bypass networking framework for this, however. We had to use two network cards, as you can only get 100Gbps over each PCIe x16 connection.
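
    The one-100GbE-port-per-x16-slot limit falls straight out of PCIe 3.0 arithmetic (a rough sketch; the lane count and 128b/130b encoding figures are standard PCIe 3.0 values, and packet header overhead is ignored):

        # Usable bandwidth of a PCIe 3.0 x16 slot vs. a 100GbE port.
        # PCIe 3.0 runs 8 GT/s per lane with 128b/130b line coding.
        LANES = 16
        GT_PER_SEC = 8.0        # gigatransfers per second per lane
        ENCODING = 128 / 130    # 128b/130b coding efficiency

        slot_gbps = LANES * GT_PER_SEC * ENCODING
        print(f"PCIe 3.0 x16: about {slot_gbps:.0f} Gb/s "
              f"({slot_gbps / 8:.1f} GB/s) of payload bandwidth per direction")
        # ~126 Gb/s: enough for one 100GbE port, but not for both ports of a
        # dual-port 100G card at line rate -- hence two cards for 200Gbps.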
  • binarp - Thursday, April 4, 2019 - link

    It's hard to say from the slides here, which are largely useless drivel saying things like "application centric packet queuing", but this NIC looks like it is based on the FM line of chips that came to Intel with the Fulcrum acquisition. The FM6000 that put Fulcrum on the map was one of the earliest Ethernet switch ASICs with a meaningfully reconfigurable pipeline.
  • Dug - Friday, April 5, 2019 - link

    I don't see RDMA listed for the 500 or 700 series. Seems odd.
  • thehevy - Friday, April 5, 2019 - link

    I wanted to help clarify a point of confusion regarding the ADQ test results for Redis. The performance comparison was of open source Redis on 2nd Gen Intel® Xeon® Scalable processors and the Intel® Ethernet 800 Series, with ADQ vs. without ADQ on the same server (SUT). The test shows the same application running with and without ADQ enabled. It is not comparing Redis running on two different servers.
    This statement does not reflect the actual test configuration used:
    “Unfortunately, it was hard to see how much of a different ADQ did in the examples that Intel provided – they compared a modern Cascade Lake system (equipped with the new E810-CQDA2 dual port 100G Ethernet card and 1TB of Optane DC Persistent Memory) to an old Ivy Bridge system with a dual-port 10G Ethernet card and 128 GB of DRAM (no Optane). While this might be indicative of a generational upgrade to the system, it’s a sizeable upgrade that hides the benefit of the new technology by not providing an apples-to-apples comparison.”
    Tests were performed using Redis* Open Source with two 2nd Generation Intel® Xeon® Platinum 8268 processors @ 2.8GHz and Intel® Ethernet 800 Series 100GbE on a Linux 4.19.18 kernel.
    The client systems used were 11 Dell PowerEdge R720 servers, each with two Intel Xeon E5-2697 v2 processors @ 2.7GHz and one Intel Ethernet Converged Network Adapter X520-DA2.
    For complete configuration information see the Performance Testing Application Device Queues (ADQ) with Redis Solution Brief at this link: - http://www.intel.com/content/www/us/en/architectur...
    The Intel Ethernet 800 Series is the next-generation architecture providing port speeds of up to 100Gb/s. Here is a link to the Product Preview:
    https://www.intel.com/content/www/us/en/architectu...
    Here is a link to the video of a presentation Anil and I gave at the Tech Field Day on April 2nd.
    https://www.youtube.com/watch?v=f0c6SKo2Fi4
    https://techfieldday.com/event/inteldcevent/

    I hope this helps clarify the Redis performance results.
    Brian Johnson – Solutions Architect, Intel Corporation
    #iamintel

    Performance results are based on Intel internal testing as of February 2019, and may not reflect all publicly available security updates. See configuration disclosure for details. No product or component can be absolutely secure.
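
    For readers who want to sanity-check this kind of claim on their own hardware, here is a minimal, hypothetical sketch of a client-side tail-latency probe against a Redis instance, since predictable tail latency is what ADQ is aimed at. It uses the standard redis-py package; the host address, key name, and request count are placeholders, and this is not the harness Intel used:

        import statistics
        import time

        import redis  # pip install redis

        # Placeholder address of the Redis server under test.
        r = redis.Redis(host="192.0.2.10", port=6379)
        r.set("latency_test_key", "x" * 100)

        latencies_us = []
        for _ in range(100_000):
            t0 = time.perf_counter()
            r.get("latency_test_key")
            latencies_us.append((time.perf_counter() - t0) * 1e6)

        cuts = statistics.quantiles(latencies_us, n=100)
        print(f"p50 = {cuts[49]:.1f} us, p99 = {cuts[98]:.1f} us")

    Running the same probe with ADQ enabled and then disabled on the server side would give an apples-to-apples view of the p99 difference on identical hardware.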
  • brunis.dk - Saturday, April 6, 2019 - link

    I just want 10Gb to be affordable
  • abufrejoval - Thursday, April 11, 2019 - link

    Buffalo sells 8- and 12-port NBASE-T switches at €/$50 per port, and Aquantia NICs are €/$99.

    It's not affordable for everyone, but remembering what they asked 10 years ago, I fell for it and have no regrets.
  • boe - Sunday, April 7, 2019 - link

    I have no idea what to get at this point. I currently have 10G at home but that isn't enough - my RAID controllers are pushing my 10G to the limit (I don't have a switch - I have a single quad-port card connected to 3 other systems). When I look at throughput on the ports, they are maxed out. I was considering 40G, but there are no quad-port NICs and 40G switches are too expensive for me. I was also looking at 25, 50 or 100G NICs but can't find the right option, which was simple with 10G. Anyone know of any cheap 25/50/100G switches, or any 25/50/100G quad-port NICs?
  • abufrejoval - Thursday, April 11, 2019 - link

    Since this seems to be a 'home' setup: you may be better off getting used 25 or even 40Gbit InfiniBand NICs and switches. I have seen some really tempting offers on sites specializing in recycling data center equipment.

    Perhaps it's a little difficult to find, but bigger DCs are moving so quickly to 100 and even 400Gbit that 25/40Gbit gear gets moved out.

    And you can typically run simulated Ethernet as well as TCP/IP over IB, at least with Mellanox (sorry, Nvidia) gear.

    If you don't have a lot of systems, you can also work with Mellanox 100Gbit NICs running in host-chaining mode, essentially something that looks like FC-AL or Token Ring. It only works with the Ethernet personality of those hybrid cards, but you avoid the switch at the cost of some latency when there are too many hops (I wouldn't use more than 4 chained hosts). I am running that with 3 fat Xeon nodes, and because these NICs are dual-port I can even avoid the latency and bandwidth drop of the extra hop by going right or left in the 'ring'.

    The feature was given a lot of public visibility in 2016, but since then Mellanox management must have realized that they might sell fewer switches if everyone started using 100Gbit host chaining to avoid 25 or 40Gbit upgrades, so it's tucked away in a corner without 'official' support, but it works.

    And if your setup is dense you can use direct connect cables and these 100Gbit NICs are much cheaper than if you have to add a new 40Gbit switch.
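
    To make the hop-count trade-off concrete, a small sketch of the topology math behind host chaining with dual-port NICs (just the ring arithmetic, nothing Mellanox-specific):

        # Worst-case path length between two hosts in a bidirectional ring
        # built by chaining dual-port NICs (traffic can go left or right).
        def worst_case_links(n_hosts: int) -> int:
            # With two directions available, the farthest peer is halfway around.
            return n_hosts // 2

        for n in (3, 4, 6, 8):
            links = worst_case_links(n)
            print(f"{n} chained hosts: worst-case path {links} link(s), "
                  f"{max(links - 1, 0)} intermediate host(s) forwarding")
        # With 3 hosts every peer is a direct neighbour (no forwarding at all),
        # which matches the dual-port, 3-node setup described above.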
  • boe - Wednesday, May 8, 2019 - link

    I'd like to know how many media access controllers they support. I have 4 computers I want to connect at greater than 10G (what I currently have, and it's maxed out). I can connect each directly without a switch because I have a quad-port 10G card on one. I doubt we'll see any quad-port 100G NICs (although that would be great). An option would be using a fan-out cable, but that requires more than two media access controllers per NIC. The dual-port Mellanox cards only have 2 per card, which means you can't use fan-outs to connect to other NICs directly.
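
    The port-count problem is just full-mesh arithmetic: connecting N machines directly, with no switch, needs N-1 ports (and therefore N-1 MACs) per machine. A quick sketch:

        # Ports and point-to-point links needed to directly connect N hosts
        # without a switch (full mesh, one link per pair of hosts).
        def full_mesh(n_hosts: int):
            ports_per_host = n_hosts - 1
            total_links = n_hosts * (n_hosts - 1) // 2
            return ports_per_host, total_links

        for n in (3, 4, 5):
            ports, links = full_mesh(n)
            print(f"{n} hosts: {ports} ports (and MACs) per host, {links} cables")
        # 4 hosts -> 3 ports per host: fine with a quad-port 10G NIC, but a
        # dual-port 100G card only exposes 2 MACs, hence the fan-out problem.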
  • boe - Tuesday, November 26, 2019 - link

    Haven't they been talking about this for about a year? When will we see the product available for sale?
  • Guy111 - Thursday, August 27, 2020 - link

    Hi,
    I have an E810-CAM2.
    I created a bridge on the card and get no RX packets from the Ixia.
    Another card (different brand) works well.
    I installed CentOS 7.6 with the ice driver.
    Why?
