Instant On - Wired

 View Only
  • 1.  Terrible performance with flow control enabled on a 1930 switch

    Posted 01-10-2023 03:54 AM


    I'm running into a weird issue. I've recently added a 10 Gbps Mellanox ConnectX-3 card to my HP MicroServer Gen10 Plus. I have an Aruba Instant On 1930 24G 4SFP+ switch, plus a Ubiquiti U6 Pro, a U6 Lite and an AC LR. The server runs RHEL 8.7 with the latest drivers for the Mellanox card, taken directly from NVIDIA. The SFP+ transceivers are coded for the Aruba switch and the Mellanox card, respectively.

    There's a 10 Gbps link between the switch and the server. Although I don't have any 10 Gbps-capable clients at the moment, I can confirm that the link achieves speeds well over 1 Gbps: I tested with multiple laptops using iperf, and Samba Multichannel runs at full speed across four USB dongles (the server has multiple IPs on the 10G interface). In short, I can saturate every 1 Gbps device/NIC in my house and push almost 6 Gbps of aggregate traffic into the 10 Gbps card.
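    For reference, the kind of multi-stream aggregate-throughput check described above can be approximated without iperf using plain TCP sockets. This is only a minimal local sketch, not the actual test setup from the post; stream count and transfer sizes are illustrative placeholders:

    ```python
    # Minimal multi-stream TCP throughput sketch, loosely in the spirit of
    # running iperf with parallel streams. All sizes are illustrative.
    import socket
    import threading
    import time

    HOST = "127.0.0.1"               # placeholder; a real test uses the server's IP
    STREAMS = 4                      # parallel streams, like iperf -P 4
    CHUNK = 64 * 1024                # 64 KiB send buffer
    TOTAL_PER_STREAM = 8 * 1024 * 1024  # 8 MiB per stream

    def sink(conn):
        # Drain everything the client sends, then close.
        with conn:
            while conn.recv(CHUNK):
                pass

    def acceptor(sock):
        # Accept one connection per stream and drain each in its own thread.
        for _ in range(STREAMS):
            conn, _ = sock.accept()
            threading.Thread(target=sink, args=(conn,)).start()

    def send_stream(port):
        payload = b"\0" * CHUNK
        with socket.create_connection((HOST, port)) as s:
            sent = 0
            while sent < TOTAL_PER_STREAM:
                s.sendall(payload)
                sent += len(payload)

    srv = socket.socket()
    srv.bind((HOST, 0))              # port 0: let the OS pick a free port
    port = srv.getsockname()[1]
    srv.listen(STREAMS)
    threading.Thread(target=acceptor, args=(srv,), daemon=True).start()

    start = time.monotonic()
    threads = [threading.Thread(target=send_stream, args=(port,)) for _ in range(STREAMS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start

    total_bits = STREAMS * TOTAL_PER_STREAM * 8
    print(f"aggregate throughput: {total_bits / elapsed / 1e9:.2f} Gbit/s over {STREAMS} streams")
    ```

    Over loopback this mostly measures the local TCP stack, but pointed at a real host it gives a rough aggregate figure comparable to an iperf run with parallel streams.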

    But all my APs are capped at a download speed of around 300 Mbps or less, while upload speed is unaffected. I haven't touched my Wi-Fi settings. Forcing the SFP+ switchport to 1 Gbps is enough to restore full speeds immediately, even in the middle of an iperf test. My expected speeds are 1 Gbps/1 Gbps for the U6 Pro (160 MHz), 750/750 Mbps for the U6 Lite, and around 600/600 Mbps for the AC LR.

    I have found that the issue is probably related to flow control, so I enabled it on the switch. That fixed the APs: all of them reach their full respective speeds again, just as before the NIC upgrade (1 Gbps for the U6 Pro, 800 Mbps for the U6 Lite, 600 Mbps for the AC LR). But another, even more annoying issue has popped up in its place.

    Now all my wired devices experience terrible network performance. A single device can still reach ~1 Gbps, but adding a second device drops one of them to about 100 Mbps, and adding more wired devices caps the aggregate throughput through the 10 Gbps card at less than 1.5 Gbps.

    My switch does not allow disabling flow control per port, but I found a note on page 46 of the user guide: apparently, disabling auto-negotiation also disables flow control. So I forced 1 Gbps on every port except those for the server and the APs. The wired devices still experienced the same connectivity issues as before.
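    For what it's worth, flow control can also be checked and toggled on the server's NIC itself with ethtool, independently of the switch. A sketch of the relevant commands on RHEL 8; the interface name `ens1` and the NetworkManager profile name are placeholders, not from the original post:

    ```shell
    # Show current pause-frame (flow control) settings on the 10G interface.
    # "ens1" is a placeholder; substitute the ConnectX-3 interface name.
    ethtool -a ens1

    # Disable RX/TX pause frames on the NIC side (requires root).
    sudo ethtool -A ens1 rx off tx off

    # To make it persistent via NetworkManager (assumes the connection
    # profile is also named "ens1"; NM supports ethtool.pause-* properties
    # in recent versions).
    sudo nmcli connection modify ens1 ethtool.pause-rx off ethtool.pause-tx off
    ```

    Turning pause frames off on the NIC instead of the switch can help isolate which side is actually generating the pause storms that throttle the other ports.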

    I am at a loss. I don't know which device in my network causes this behavior, but it has made my network worse than before the upgrade (I previously used bonded interfaces on the server).

    Any help is greatly appreciated. Thank you.

    Roman Romancik

  • 2.  RE: Terrible performance with flow control enabled on a 1930 switch

    Posted 01-10-2023 06:20 PM
    I still experience the issue, but I noticed it can be influenced by fiddling with QoS settings. Is it possible to set up QoS to balance the speed between the 10 Gbps NIC and the 1 Gbps APs instead of using flow control? These are my settings:

    Roman Romancik