Link aggregation between CentOS 5 and a SLM2024

It's been a while since I made time to try something new. This week, I finally took something off the "need to try this" list: link aggregation. I've had a gigabit Ethernet switch with link aggregation support for about a year now, and my main Linux box has three gigE NICs, but I was still only using one of them. Time for a change.

Google found me some good documentation for channel bonding on CentOS 5. Manually editing ifcfg-eth{0,1,2}, ifcfg-bond0, and modprobe.conf is all that's required. That worked, but the default bonding mode is "balance-rr", the simplest load-balancing algorithm. What I wanted was full IEEE 802.3ad link aggregation, mode 4 of the bonding module.
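For reference, this is roughly what the configuration looks like on CentOS 5; the IP address is a placeholder and the interface names are specific to my box, so adjust to taste.

    # /etc/modprobe.conf -- load the bonding driver in 802.3ad (mode 4)
    alias bond0 bonding
    options bond0 mode=4 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10       # placeholder address
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (same idea for eth1 and eth2)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none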

During testing, I got fooled into believing that "service network restart" unloads and reloads the bonding module. It doesn't; I should have tested with "service network stop; rmmod bonding; service network start" from the start. Lesson learned. I configured the switch for LACP mode (dynamic link aggregation instead of static), and I was ready for some bandwidth testing.
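A quick sketch of the reload-and-verify sequence; checking /proc/net/bonding/bond0 is a handy way to confirm the mode actually changed (the exact output varies by kernel, but the "Bonding Mode" line is what matters).

    # force the bonding module to be reloaded with the new options
    service network stop
    rmmod bonding
    service network start

    # confirm the bond really came up in 802.3ad mode
    grep -i mode /proc/net/bonding/bond0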

I tried a couple of different bandwidth eaters (flood ping, NFS reads), but they didn't really stress the configuration. In comes netcat: "nc -l 5555 > /dev/null" on one side and "nc myserver 5555 < /dev/zero" on the other, and you have a gigabit stream of data in no time. Using dstat and a couple of netcats, the current record stands at more than 200 MB/s. Mission accomplished!
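A sketch of that kind of test; the port numbers and "myserver" are placeholders. Since 802.3ad hashes each flow onto a single slave link, it takes several parallel streams (ideally from more than one client) to push past the speed of a single NIC.

    # on the receiving box: one listener per test stream
    nc -l 5555 > /dev/null &
    nc -l 5556 > /dev/null &

    # on the sending box(es): blast zeroes at the listeners
    nc myserver 5555 < /dev/zero &
    nc myserver 5556 < /dev/zero &

    # watch aggregate network throughput while the streams run
    dstat -n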

Comments

So you never had any problems with the SLM2024 doing mode 4 channel bonding? I'm thinking of purchasing one for a college cloud, but there's a Newegg post that says channel bonding doesn't work on it.

The cloud will be between a bunch of XCP hosts (CentOS based).

Are you still running in this mode and do you see any reliability issues?