51 Comments

  • bcronce - Friday, September 28, 2018 - link

    I can't wait for a fanless 16-port+ switch. And not Netgear. But nice to see the state of things. Thanks for the article.
  • nagi603 - Monday, October 1, 2018 - link

    Yeah, me too. There are some people who modded smaller Netgear switches with Noctua fans, but it's still no comparison to an actual fanless switch.
  • deil - Friday, September 28, 2018 - link

    Well, I wanted a 2.5Gbps/5Gbps network at home, so one or two 10Gbps ports plus 2-6 slower ones at 2.5Gbps or 5Gbps are fine. Still, I want a small device I can hide behind furniture, not such a corporate-sized thing.
  • saratoga4 - Friday, September 28, 2018 - link

    >There's also the possibility of doing some network card testing in the future now.

    Yes please!
  • Death666Angel - Friday, September 28, 2018 - link

    Thanks for the article. Amazon.de currently has an 8 x 10GbE switch for 320€ while everywhere else it's 500€+. Almost seems tempting. Unfortunately, I don't really need it, so 320€ is 320€ I can save. Plus, there are no mATX AM4 boards with integrated 10GbE. The first mATX B450 board with great VRMs and 10GbE under 200€ gets my purchase. :D Or really any mATX X470 board. :(
  • dgingeri - Friday, September 28, 2018 - link

    A lot of this presupposes that we're stuck on 10GBase-T. I went with Intel X520 cards for $90 each plus SFP+ modules for $20 and a D-Link DGS-1510-28X for $400, and was able to get four 10G links on my network to my servers and main machine for far cheaper than 10GBase-T offers right now. I don't get people's need for that format.
  • Death666Angel - Friday, September 28, 2018 - link

    Tons of Cat-x cable runs in existing buildings? Just a guess here. Also, $500+ for four 10G ports does not sound remotely cheap.
  • dgingeri - Saturday, September 29, 2018 - link

    1. It doesn't help when that old Cat5e or Cat6 cable can't run 10Gb because it either isn't up to spec or wasn't installed to spec for 10Gb. I've already seen that mess up someone's plan to put 10Gb in their office. They had to run a new cable, and that cost $225.

    2. For the switch, it is only 4 10Gb ports, but it also has 24 1Gb ports, and I'm using 11 of those, so that switch for $400 was well worth it. Few people need all their ports to be 10Gb at this point. As for the cards and SFP+ modules, they're far cheaper than so many people keep claiming, and cheaper than most 10GBase-T cards. (I've heard some people complain about expensive cables, since they are fiber optic, or even complain about the color, but I got my white 3m OM-3 cables for $11. I dare anyone to find 3m Cat6a cables that cheap. Plus, OM-3 isn't vulnerable to EM interference at all.)
  • dgingeri - Saturday, September 29, 2018 - link

    Oh, and we're also talking 10G in home networking in this context, not corporate.
  • dgingeri - Saturday, September 29, 2018 - link

    Looking up what I bought previously, the prices have changed a bit, but the overall cost is about the same. The switch is cheaper, but the NICs are more expensive:
    DGS-1510-28X - $367.50 - https://www.amazon.com/D-Link-Systems-SmartPro-Sta...
    10GTek Intel 82599 single port NIC - $129 - https://www.amazon.com/10Gtek-E10G42BTDA-Ethernet-...
    SFP+ module for 10Gtek NIC - $21 - https://www.amazon.com/10Gtek-Intel-E10GSFPSR-Tran...
  • thewishy - Monday, October 1, 2018 - link

    Exactly the same setup as I have at home. Fileserver, ESXi boxes and my desktop are all hooked up to 10 gig because I chuck a lot of files around. No need to waste a 10G port on a CCTV camera.
  • JoeyJoJo123 - Monday, October 1, 2018 - link

    >it doesn't help when that old Cat5e or cat6 cable can't run 10Gb because it either isn't up to spec or wasn't installed to spec for 10Gb.

    Please stop with this nonsense. Cat5e, Cat6 (not an official cabling standard, by the way), Cat6a, and Cat7 are all twisted-pair copper cables with the same connector and the same pinout. The difference is shielding and effective SNR.

    Can you connect a 10Gig switch with existing "Cat5e" cable runs? Sure can do, boss. Some of the runs may be noisier/longer than others and may not transfer at the best rate, but it'll connect up just fine. There are even products that can effectively measure the transfer bandwidth you can get out of an existing cable run, so you may not even need to rip out every single run and replace it with new cabling. If you're not happy with the bandwidth out of your current cabling, you can replace the individual runs with higher-spec cable.

    https://www.amazon.com/dp/B00Q6Y0LIA
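
    If you just want a ballpark figure rather than a certified test, even a crude two-host probe will tell you what a given run actually negotiates and sustains. iperf3 between the two ends is the usual tool; purely as an illustrative sketch (port number and the single-stream, one-direction design are arbitrary choices, not a real benchmark):

        # throughput_probe.py - rough one-way TCP throughput check over an existing run
        # Needs Python 3.8+ (socket.create_server). Start "python3 throughput_probe.py server"
        # on one machine, then "python3 throughput_probe.py client <server-ip>" on the other.
        import socket, sys, time

        PORT, CHUNK = 5201, 1 << 20  # 1 MiB writes

        def server():
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                total, start = 0, time.time()
                data = conn.recv(CHUNK)
                while data:
                    total += len(data)
                    data = conn.recv(CHUNK)
                elapsed = time.time() - start
                print(f"received {total * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f}s")

        def client(host, seconds=10):
            payload = b"\0" * CHUNK
            with socket.create_connection((host, PORT)) as s:
                deadline = time.time() + seconds
                while time.time() < deadline:
                    s.sendall(payload)

        if __name__ == "__main__":
            server() if sys.argv[1] == "server" else client(sys.argv[2])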

    Every time people cry "Waaahhh, the Cat5e cables won't work" I die a little bit inside. It's not like it's some completely different cable; you come off like a Monster Cables salesman telling us all about the wonders of your new diamond-plated "True 4K 60Hz High Speed Ethernet over HDMI with Audio Return Channel" cables.
  • BGADK - Wednesday, October 3, 2018 - link

    I have several clients running 10GBase-T on Cat5e cables at full speed. No problem, and no errors in the logs. The cable runs are nowhere near the maximum 100 meters, more like 25-35 meters, so that is a factor.
    Before we start, I always tell the client that there is no guarantee their Cat5e installation is going to work, but there is no reason to rip out the old cables and replace them with new Cat6a before we test whether 10Gbit works on the existing Cat5e installation.
    And until now, that has always been the case. Cat5e really does work with 10GBase-T, even if it is not a sure thing.
  • cygnus1 - Saturday, September 29, 2018 - link

    I went the SFP+ route for my home lab as well, but I went with the new MikroTik 16-port 10G switch (CRS317-1G-16S+) that was only about $350. It's worked great so far. It's not technically fanless, but the fans never spin, so I never hear it. With 16 ports, it let me do dual 10G to my handful of lab servers, plus one to my main desktop. It's been awesome.
  • dgingeri - Saturday, September 29, 2018 - link

    While it is pretty cool that there's a 16-port 10G SFP+ switch, MikroTik's hardware warranty is only 1 month, so I wouldn't be comfortable buying one.

    On the other hand, I found Trendnet has released a switch with the same port config as my D-Link DGS-1510-28X for $100 less. I'm not as comfortable with Trendnet's reliability or warranty as I am with D-Link's, but it is a nice option.
  • daviderickson - Friday, September 28, 2018 - link

    If you care at all about latency, don't go for copper at 10G. We did measurements here and copper latency was 10x higher than with a direct attach cable.
  • Death666Angel - Friday, September 28, 2018 - link

    As someone not "in the know": what is a "direct attach cable", and why would that be different from 10GbE RJ45 Ethernet? Some links and further info would be appreciated.
  • praeses - Friday, September 28, 2018 - link

    A Direct Attach Cable or DAC is used for 10G+. It has an integrated interface that goes into an SFP+ slot on either end of the cable and requires less circuitry on each end, since nothing has to convert the signal to a longer-range medium. This dramatically reduces power/latency per port and can often be cheaper for short runs (15m, maybe longer now). Generally, from the following perspectives (greater is not always better):
    Power consumption:
    Copper (RJ45) > Fiber > DAC
    Range:
    Fiber > Copper (RJ45) > DAC
    Cost: (recently, this has been changing)
    Fiber > Copper (RJ45) > DAC
    Latency:
    Copper (RJ45) > Fiber > DAC
    Inter-compatibility between manufacturers/devices:
    Fiber > Copper (RJ45) > DAC

    There are best-use cases for each, especially when budget/existing infrastructure are taken into consideration. Copper will mostly be used in the home because it's easy to work with and the cabling is cheap. Otherwise, you generally want to use DAC where you can, then fiber, then copper. DAC cable management (sometimes stiffer cables/limited bend radius/limited length options) can be a little more annoying.
  • sor - Saturday, September 29, 2018 - link

    We always called it twinax; it usually had SFP+ or QSFP ends, sometimes a QSFP on one side going out to four SFP+ on the other.

    I liked it because for runs less than 10m it was far cheaper than buying optics for either end but performed just as well.
  • dgingeri - Saturday, September 29, 2018 - link

    Well, I can tell you SFP+ DAC cables are now much more expensive than optical, unless you insist on using Cisco or HP equipment. Generic SFP+ optic modules can now be found in the $20-25 range, and OM-3 cables are as cheap as $15 for 3m and $25 for 10m. Compared to the $60-100 range for DAC cables no matter the length, the optical route is quite cheap these days.
  • sor - Saturday, September 29, 2018 - link

    I haven't really been into it for the last few years, but we used to get 3m twinax cables for $20 and 5m cables for $30 from Mellanox distributors. That's short, but the vast majority of what we did was wiring within a single rack, which these lengths are perfect for. Sounds like the fiber cost hasn't changed much; at best, fiber costs double twinax for the two optics, and then the fiber itself is usually at least 3x.

    I just googled and it looks like SFP+ cables are $10-$20 for short lengths. If you buy a high end Cisco one it would be $39 on Amazon.

    When you’ve got 60 cables per rack that’s significant savings, not to mention no optics module maintenance later.
  • sor - Saturday, September 29, 2018 - link

    Check out the below link, $16-20 for most common lengths. This is retail; you can find better, especially in volume.

    https://store.mellanox.com/categories/interconnect...
  • dgingeri - Saturday, September 29, 2018 - link

    Huh. My usual experience was that SFP+ cables were expensive and troublesome. Every time I've looked them up, they were far more expensive than getting two optical modules and an OM-3 cable. In addition, I had some Cisco SFP+ cables I got with a UCS loaner for my old employer's test lab that worked fine with Cisco equipment, but would not work with any of the QLogic, Intel, or Broadcom 10Gb NICs I had there. I also had some SFP+ cables that came with an HP ProCurve that had similar compatibility issues. I was simply unwilling to deal with that again.
  • ZeDestructor - Saturday, September 29, 2018 - link

    This is true of Active Optical Cables (which are an utter joke of a design, IMO). Basic 3m or less twinax cables are much, much cheaper, both branded and unbranded.
  • cygnus1 - Monday, October 1, 2018 - link

    That wasn't my experience at all. Maybe that's something to do with the HP and Cisco branded cables, as the circuits on each end can probably be hard-coded to their devices. For my home lab, with a MikroTik switch, I was able to find some pretty cheap DAC cables that connected fine to some old Mellanox NICs.
  • praeses - Friday, September 28, 2018 - link

    "how do you review a switch anyway?"

    I can tell you how I review a switch (for work; if it passes the test there, I'll get a small one for home): by going through a checklist on whether it's viable in the first place:
    1) Does it have some sort of OOB management (serial, USB, dedicated management interface, or removable memory)?
    2) Does it leak any information without logging in? (welcome pages, etc.)
    3) Can it be recovered from a bad configuration (someone mis-clicking a setting) via some sort of test-before-apply, undo, or reasonable configuration reset?
    4) How well does it handle firmware updates? (simple process or unnecessarily convoluted, how many reboots)
    5) How long does it take to reboot? 1 min, 5 min, (10 or more is an instant-fail)
    6) Can the configuration be exported/imported in some sort of text file (if not it's usually an instant-fail)
    7) Does it have SSH access? (If not it's usually an instant-fail)
    8) Can all functionality be changed by both SSH and the GUI?
    9) Are the commands consistent, or do different sections of the configuration use weird syntax compared to the rest, like when a vendor integrates features from acquired companies but has not yet standardized the language?
    10) How well does it handle multiple roles/permission levels/accounts?
    11) Can I have another tech work on it without knowing what some brand-specific lingo means?
    12) Does it automatically correct issues, such as not allowing the default VLAN ID to be set when a port is not a member of that VLAN?
    13) How many menus (command-line or gui) does it require to make changes such as VLANs?
    14) Does the GUI require any software to be downloaded (including web browser add-ons)? Does the software work with all platforms I use? Typically, if it isn't HTTPS/HTML5 that works on all current browsers, it's an instant fail.
    15) How does it handle certificates for the web interface?
    16) What is the track record for the company to support a model line like this one for security/reliability updates?
    17) Does it require some crappy vendor locked in SFP module or DAC? Can it get proper SFP media link status and other statistics?

    Other things like SNMP, statistics, inventory, etc. are big bonuses too. I go through stuff like that before doing performance testing. Performance tends to be wire-speed in most cases unless you're doing some layer 3 stuff, so I look at the management side of things first unless it's specifically for a SAN or similar.
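
    For items 6 and 7 in particular, the quick sanity check is whether a config pull can be scripted at all. A rough sketch of what I mean, using paramiko; the CLI command is vendor-specific ("show running-config" is just a common convention), and the host/credentials are obviously placeholders:

        import paramiko

        def backup_config(host, user, password, command="show running-config"):
            """SSH in, run the vendor's config-dump command, return it as text."""
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user, password=password, look_for_keys=False)
            try:
                _, stdout, _ = client.exec_command(command)
                return stdout.read().decode(errors="replace")
            finally:
                client.close()

        # e.g. keep a dated text backup that can be diffed and re-imported later:
        # open("switch-2018-09-28.cfg", "w").write(backup_config("192.0.2.10", "admin", "secret"))

    If a switch can't survive that kind of treatment, or has no text configuration at all, it fails the checklist for me.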

    I think in a month or two the IEEE 802.3bt PoE standard is supposed to be finalized, so I'm mostly waiting on products supporting that.
  • atomt - Saturday, September 29, 2018 - link

    18) Delivers packets in the correct order.
    Had a MikroTik CSS326 fail on this during testing the other day. Seems that if a flow hits the port buffers, packets get dequeued in the wrong order.

    19) Advertised features like IGMP/MLD multicast snooping, DHCP guard/snooping/binding, IPv6 RA-Guard, and Energy Efficient Ethernet actually work and are not a huge pain to set up.

    I have a switch here that fucks up LLDP in a way that makes all link partners disable EEE if more than one EEE-supporting device is connected.

    20) Has sufficiently good buffer management to handle mismatched port speeds and other congestion properly, while not suffering from bufferbloat.
  • praeses - Saturday, September 29, 2018 - link

    Oof... yeah, I have run into 20 before, and I could see 18 being a huge headache.
  • jumbo_ - Sunday, September 30, 2018 - link

    Agreed. You see, reviewing switches is as complex as reviewing any other kind of IT gear. It is less about "speed", since switches use ASICs for forwarding the actual traffic, which bypasses the control-plane CPU, and more about the actual "experience". To add more on top of the above:
    21) will the vendor provide software/firmware upgrades 3/6/12/24/36 months after release?
    22) port-to-port latency (typically in microseconds)
    23) does it crash every 2 weeks?
    24) verify performance claims - 20Gbps (in + out) of 70-byte frames is something completely different from 20Gbps of 1500-byte frames
    24a) similar to above, just different traffic patterns - one-to-many and many-to-one
    25) verify 10/100/1000 Mbps support, half duplex, full duplex
    26) can you get basic operational information from the CLI/GUI that will allow you to actually troubleshoot? This is surprisingly the biggest problem with SOHO switches.
    26a) current information about interface up/down, pps in, pps out, output buffer drops, L1 errors
    26b) historical information about 26a, CPU %
    26c) feature status (spanning tree, snooping)
  • BillyONeal - Monday, October 1, 2018 - link

    Most of that stuff is about *managed* switches that are wholly irrelevant in a small home / home office scenario like this.
  • mode_13h - Saturday, September 29, 2018 - link

    Nice mini-review. I was surprised that one connection only saturated 1/3rd of the end-to-end bandwidth. Consider experimenting with jumbo frames, next.
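
    On the Linux side that's just a matter of raising the MTU end to end; a rough pyroute2 sketch (the interface name is a placeholder, and the switch plus every NIC in the path have to accept jumbo frames too, otherwise you just trade throughput for drops):

        # equivalent to "ip link set dev enp5s0 mtu 9000"
        from pyroute2 import IPRoute

        ipr = IPRoute()
        idx = ipr.link_lookup(ifname="enp5s0")[0]  # placeholder interface name
        ipr.link("set", index=idx, mtu=9000)       # 9000 is the usual jumbo-frame MTU
        ipr.close()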

    > I’m sure I can find a place for the cats to sit on it and enjoy.

    And get it clogged with cat fur? That will surely get the fans to start spinning...
  • atomt - Saturday, September 29, 2018 - link

    *strokes the 32-port 40GbE Arista switch currently powering my home while breathing heavily*

    "ed: I still think you're insane"

    You can take the insanity further. I believe in you.
  • Psycho_McCrazy - Saturday, September 29, 2018 - link

    Serenity now, Insanity later!

    (And serenity implies running Cat6 cables at my home so a YouTube stream can't saturate the WiFi and block access to the file server.)
  • imaheadcase - Saturday, September 29, 2018 - link

    Wait a tick, you can get 5-8 port 10Gb switches on Amazon for around $250-300. Or are they just cheaper in the US?
  • imaheadcase - Saturday, September 29, 2018 - link

    I keep thinking of upgrading my home network to it, but the sad reality is that outside of a few situations it's not needed yet. Especially since most things you connect to the network still have 1Gb ports. I mean, you can stream 4K fine, and as long as you've got fast internet it's easier to just re-download Steam stuff than worry about a cache system.
  • V900 - Saturday, September 29, 2018 - link

    What's the advantage of a switch?

    Suppose someone has a total of ten devices at home and a 5-port router. Half of the devices use the router through LAN cables, and the other five are connected through Wi-Fi.

    What would be the advantage of hooking a ten port switch to the router, and connecting all ten devices with LAN cable to the switch?
  • ZeDestructor - Saturday, September 29, 2018 - link

    Some of us just have that many wired devices.

    In my tiny home network, I have:

    1x10G desktop
    1 laptop dock
    2 APs
    2x10G for the big, heavy, noisy rackmounted server
    1 for the aforementioned server's IPMI (cause that's not on the 10G links of said server)
    1 for the modem (modem lives on a VLAN that I then trunk to the aforementioned server)
    1 for the TV STB
    1 for the printer

    That's 10 ports (because some devices devour ports, like my server). And I consider my network positively tiny. And I've been religiously ignoring IoT so far, cause I have seen nothing I find useful enough to dive into the quagmire of security fails.
  • ZeDestructor - Saturday, September 29, 2018 - link

    Oh, and let's not forget keeping unused wall ports connected, so that future ZeDestructor doesn't have to go into the shed and move wires around when nerdy guests are over.
  • wolrah - Saturday, September 29, 2018 - link

    > What would be the advantage of hooking a ten port switch to the router, and connecting all ten devices with LAN cable to the switch?

    Since I can't be sure if you understand this, you're already using a switch. Your "router" is just a combination device that has it built in. The actual router part is a system-on-a-chip that usually has two Ethernet interfaces on it. One is the WAN port; the other is wired internally to one of the ports on a separate switch chip, and the rest of those are exposed as your "LAN" ports. WiFi is connected over some other internal interface, either PCI Express or something proprietary from the SoC vendor. There are some variants on this design, but that's the basic idea behind pretty much every home/small business "router".

    The reason to add another switch would be if you have run out of ports, or need ports somewhere far from the original device where running an additional cable would be impractical.

    The reason to use more ports in your scenario would be that WiFi is unreliable and low-performance compared to a wire. For some use cases this may not matter, especially if you're in a rural area with limited RF interference, but those in dense urban environments should definitely wire everything they can, and even many of us like myself with relatively clear RF spectrum prefer to wire whatever we can, just to know that the network will not be a problem.

    If I try to use my wireless headphones with my Steam Link in my bedroom that has no wired ethernet connection, stream quality goes down and latency goes up as they compete over spectrum. The wired Steam Link in my living room works perfectly no matter what's going on.

    My personal rule is that if it doesn't move it should be wired if at all possible. Desktop computers, printers, set-top boxes, game consoles, VoIP phones, NAS boxes, servers, cameras, etc. WiFi is for portable devices and low bandwidth IoT type stuff that gets placed in locations that'd be inconvenient to wire.

    I hate the fact that so many consumer-tier devices are being made now that are designed to live permanently near a computer or TV screen (aka the most likely places to have wiring available) but don't have Ethernet ports. There are printers that only have WiFi. Printers! And don't get me started on set-top boxes. I'll let it slide in the "TV stick" form factor as long as USB-OTG adapters are supported, like the Chromecast does, but things like the Nintendo Wii or most of the current non-stick Roku line not having a wired network port is insane.
  • wolrah - Saturday, September 29, 2018 - link

    Forgot to note one other reason one might add a new switch to a network: the primary reason for this article, to increase speed beyond what one's previous switch supported. Very few routers have 2.5/5/10G ports at all, and even fewer of them are combination switch/router devices offering >1G speeds on the switch ports. If you want 10G networking you're either buying a switch or directly connecting the machines, and direct connections don't really scale well beyond three machines.
  • abufrejoval - Sunday, September 30, 2018 - link

    Beat you to it a couple of weeks ago using a 12-port Buffalo Technology BS-MP2012 at €600, or €50/port including taxes; my initial report is somewhere here on this site.

    The Aquantia NICs were down to €80/piece for a week or so, so I upgraded all my home-lab’s core servers.

    Been on that very same mission for 10 years and only stumbled across that 12-port NBase-T switch in summer. I had been using direct-connect cables with Intel and Broadcom 10GBase-T adapters before, but removed them from my home lab because those NICs required too much cooling at 10 Watts/port: those were dual-port NICs targeting rack-mount servers with serious airflow, and they kept dying in my desktops.

    With Aquantia this is down to 3 Watts/port (a 1xx-series chip on the NIC, three 4xx-series chips in the switch for a total of around 40 Watts TDP), which works just fine with my noise-optimized, desktop-technology home-lab servers.

    And noise was the major challenge with the Buffalo switch, too, as the original fans are just not “desktop-compatible”, but need to remove 40 Watts of heat. I installed Noctua 40x40x20mm fans with constant air-flow, voiding all warranties and putting the life of my family at risk, but I can no longer hear it, while it just gets a little warm, not hot.

    Incidentally last week I also went the next step to 100Gbit/s in the corporate lab!

    Mellanox offers hybrid NICs: ConnectX-5 adapters that support both Ethernet and InfiniBand semantics, even NVMe over Fabrics, so you get "memory", "network" and "storage" semantics across a single fabric at close to PCIe 3.0 x16 limits.

    Since the NIC and the switch silicon are essentially the same, only different sizes, the Mellanox engineers decided to include a "host-chaining" mode, which allows you to daisy-chain NICs using cheap direct-connect cables (€100/piece) without a switch, similar to ARCNET, Token Ring or Fibre Channel Arbitrated Loop (FC-AL). Of course it means a shared medium, so it doesn't scale, but at 100Gbit it takes about 10 hosts sharing the chain before a 10Gbit star comes out ahead. And then you can just create meshes etc. by adding more NICs to your servers: composable, hyper-converged hardware, a CIO's wet dream!

    Obviously Mellanox management wasn't too happy about that, so currently it only works with the Ethernet personality of the VPI NICs, and I only managed to massage 30Gbit/s out of these links, even though the boxes are beefy Xeon Scalable Gold systems.

    I find this daisy-chaining mode extremely intriguing because you can build all sorts of interconnect topologies while avoiding the big cost jump of central switches.
  • oRAirwolf - Monday, October 1, 2018 - link

    Nice catch. I would have happily paid that. I would be very interested to see some comparisons between an AQC107, X540, X550, and a Mellanox ConnectX-3. I use the AQC107 and ConnectX-3 in my home network and would love to see some data about CPU usage and latency.
  • piroroadkill - Monday, October 1, 2018 - link

    The cheapest I know of (and that I also bought) is the Netgear MS510TX. It has a 10Gbit SFP+ port, a 10Gbit copper port, 2× 5Gbit ports, and 2× 2.5Gbit ports, alongside 4× 1Gbit ports.
    I feel like that's actually a pretty decent setup for the price ($270).

    However, I have nothing but issues with the Aquantia cards and this switch, regardless of cables. I've tried newer drivers, older drivers, different operating systems; forget it. It's so flaky that I keep going back to the onboard Intel NICs, which never have issues.
  • cm2187 - Monday, October 1, 2018 - link

    What I am really waiting for to fully adopt 10GbE is the ability to have longer Thunderbolt 3 cables. Almost all laptops have no Ethernet port anymore, let alone 10 gigabit, but a laptop connected by a single cable to a dock that does power + 10GbE would work for me. It's just not lap-able with a 50cm cable; I kind of need 2m-ish.
  • doggface - Tuesday, October 2, 2018 - link

    I would imagine my use case is common: I have a NAS with a peak transfer speed of 1Gbit (~100MB/s). According to WD, my NAS drives can hit ~200-250MB/s, so 5G would shift my bottleneck to the HDDs rather than the network interface and make things like SSD caching a worthwhile exercise. SSD-based storage would also be an interesting proposition, especially since the $/GB is dropping. But this would require at minimum a 5-port 5G switch, which I am probably not going to buy above $100-150.
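
    Rough numbers behind that (assuming roughly 94% of line rate is usable after Ethernet/TCP overhead, and ~200-250MB/s sequential from the drives):

        # back-of-the-envelope: usable MB/s per link speed
        for gbps in (1, 2.5, 5, 10):
            print(f"{gbps:>4} GbE -> ~{gbps * 1000 / 8 * 0.94:.0f} MB/s usable")
        # 1 GbE: ~118 MB/s, so the wire is the bottleneck
        # 5 GbE: ~588 MB/s, so the HDDs (or an SSD cache) become the bottleneck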

    I don't do prosumer or pro-grade stuff, so I don't have the needs that others here have, and I need to get this past the WAF.
  • alpha754293 - Thursday, October 4, 2018 - link

    @Ian
    Thank you for this preview/"review".

    Regarding the question in your article, "how do you review a switch anyway?", what I would be looking for is the full round-trip point-to-point latency under a variety of loading conditions, as well as raw throughput.

    Sadly, there aren't very many (if any at all) "consumer grade" apps that would ever measure or know anything about that. On the flip side of the coin, only a handful of commercial/enterprise apps care (mostly database apps), while pretty much ALL HPC and distributed processing/distributed computing apps care about round-trip point-to-point latencies and raw throughput.

    To give you an example, I have a home office where I perform engineering analysis and simulations, and at any given point a single simulation can be pushing ~2 TB of data over the network (for which I currently only have a 1GbE network as my interconnect fabric).

    (There are MPI testing tools to help measure point-to-point network latency and bandwidth; the basic idea is that you thrash the switch under a number of different loading conditions (i.e. n x n computers talking to each other, or somehow simulate that if you don't actually have n x n computers to play with).)
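
    For example, a bare-bones mpi4py ping-pong along those lines (the OSU micro-benchmarks' osu_latency/osu_bw do this far more rigorously; the mpirun host list below is just a placeholder):

        # run with: mpirun -np 2 -host nodeA,nodeB python3 pingpong.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        buf = np.zeros(1, dtype="b")   # 1-byte message isolates latency from bandwidth
        reps = 10000

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1, tag=0)
                comm.Recv(buf, source=1, tag=0)
            elif rank == 1:
                comm.Recv(buf, source=0, tag=0)
                comm.Send(buf, dest=0, tag=0)
        t1 = MPI.Wtime()

        if rank == 0:
            print(f"average one-way latency: {(t1 - t0) / (2 * reps) * 1e6:.1f} us")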

    I've been looking to upgrade my interconnect fabric for what I do and need/use something like this at home, and ~$30/port is significantly more affordable than IB EDR (where a 36-port IB EDR switch is $11,525 (~9000 GBP), or approximately $320/port, and the adapter cards can range from almost $400 to $660 per port). In other words, VERY expensive. (Granted, IB EDR has a peak theoretical throughput rate that's 10 TIMES that of 10GBase-T, but it's still very expensive.)

    So it would be interesting to read in-depth tests in regards to these topics when reviewing the switch.

    It would also be interesting to see whether the switch is managed such that you can aggregate multiple links together for higher performance, and to see how much of an increase in throughput that can deliver vs. just using multiple unbonded ports.
  • Fratslop - Monday, June 17, 2019 - link

    Or you can get yourself a Brocade 6610 for ~$300 on eBay. 48 1-gig PoE ports, 8 10-gig SFP+, and 4 40-gig ports in the back.
