How To Do Linux Network Bonding Teaming in Mint Debian Ubuntu

Bonding is an excellent way to get both increased redundancy and throughput.  It is similar to the "Network Teaming" feature in Windows.

There are a few different modes, but we will use mode 6 (balance-alb).  I think it's the best of both worlds: it is not just a failover, it also balances traffic across the slaves, so you get redundancy and load balancing at the same time.  With four 1G ports bonded, you will have a combined theoretical throughput of 4G.  Just bear in mind that the true throughput depends on the type of load your server is running and the type(s) of storage you are using.  If you have a RAID array or non-RAID array that cannot deliver 4G of disk bandwidth and you're serving files, then the storage will be the bottleneck.

Note that the only modes that DON'T require LACP/Etherchannel config on the switch are modes 1, 5 and 6.

In our example we are going to take a Debian-based server with 4 NIC ports (eth0, eth1, eth2, eth3).  We've renamed the NICs to have proper names instead of their original names enp4s0f0, enp4s0f1, enp4s1f0, enp4s1f1.

We enabled the BIOS dev names feature in the kernel to get the eth0 naming convention back (this helps ensure that your NICs will still work if you move your HDDs/RAID array to another physical server).

More about Bonding from the Linux Kernel.

Bonding Debian Documentation

Bonding Mode Info

Modes 1, 5 and 6 are the ones that do not require any special switch config.

Bonding Mode     Configuration on the Switch
0 - balance-rr     Requires static Etherchannel enabled (not LACP-negotiated)
1 - active-backup     Requires autonomous ports
2 - balance-xor     Requires static Etherchannel enabled (not LACP-negotiated)
3 - broadcast     Requires static Etherchannel enabled (not LACP-negotiated)
4 - 802.3ad     Requires LACP-negotiated Etherchannel enabled
5 - balance-tlb     Requires autonomous ports
6 - balance-alb     Requires autonomous ports
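As a quick way to keep the table above straight when scripting, here is a tiny helper function (purely illustrative, not part of any package) that maps a mode number to its name and switch requirement:

```shell
# Map a bonding mode number to its name and the switch-side requirement
# (same information as the table above).
bond_mode_name() {
    case "$1" in
        0) echo "balance-rr (static Etherchannel)" ;;
        1) echo "active-backup (no switch config)" ;;
        2) echo "balance-xor (static Etherchannel)" ;;
        3) echo "broadcast (static Etherchannel)" ;;
        4) echo "802.3ad (LACP Etherchannel)" ;;
        5) echo "balance-tlb (no switch config)" ;;
        6) echo "balance-alb (no switch config)" ;;
        *) echo "unknown" ;;
    esac
}
bond_mode_name 6   # -> balance-alb (no switch config)
```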

You will see later on, when creating our bond, that we can specify either the number or the name:


bond-mode 0

bond-mode balance-rr

I prefer not to use 802.3ad unless necessary, as our goal is portability and flexibility, and 802.3ad needs LACP/Etherchannel configuration on the switch.  In other words, if you plug into a port that is not configured for LACP, your bond will not work.


	Specifies one of the bonding policies. The default is
	balance-rr (round robin).  Possible values are:

	balance-rr or 0

		Round-robin policy: Transmit packets in sequential
		order from the first available slave through the
		last.  This mode provides load balancing and fault
		tolerance.

	active-backup or 1

		Active-backup policy: Only one slave in the bond is
		active.  A different slave becomes active if, and only
		if, the active slave fails.  The bond's MAC address is
		externally visible on only one port (network adapter)
		to avoid confusing the switch.

		In bonding version 2.6.2 or later, when a failover
		occurs in active-backup mode, bonding will issue one
		or more gratuitous ARPs on the newly active slave.
		One gratuitous ARP is issued for the bonding master
		interface and each VLAN interface configured above
		it, provided that the interface has at least one IP
		address configured.  Gratuitous ARPs issued for VLAN
		interfaces are tagged with the appropriate VLAN id.

		This mode provides fault tolerance.  The primary
		option, documented below, affects the behavior of this
		mode.

	balance-xor or 2

		XOR policy: Transmit based on the selected transmit
		hash policy.  The default policy is a simple [(source
		MAC address XOR'd with destination MAC address XOR
		packet type ID) modulo slave count].  Alternate transmit
		policies may be selected via the xmit_hash_policy option,
		described below.

		This mode provides load balancing and fault tolerance.

	broadcast or 3

		Broadcast policy: transmits everything on all slave
		interfaces.  This mode provides fault tolerance.

	802.3ad or 4

		IEEE 802.3ad Dynamic link aggregation.  Creates
		aggregation groups that share the same speed and
		duplex settings.  Utilizes all slaves in the active
		aggregator according to the 802.3ad specification.

		Slave selection for outgoing traffic is done according
		to the transmit hash policy, which may be changed from
		the default simple XOR policy via the xmit_hash_policy
		option, documented below.  Note that not all transmit
		policies may be 802.3ad compliant, particularly in
		regards to the packet mis-ordering requirements of
		section 43.2.4 of the 802.3ad standard.  Differing
		peer implementations will have varying tolerances for
		noncompliance.

		Prerequisites:

		1. Ethtool support in the base drivers for retrieving
		the speed and duplex of each slave.

		2. A switch that supports IEEE 802.3ad Dynamic link
		aggregation.

		Most switches will require some type of configuration
		to enable 802.3ad mode.

	balance-tlb or 5

		Adaptive transmit load balancing: channel bonding that
		does not require any special switch support.

		In tlb_dynamic_lb=1 mode; the outgoing traffic is
		distributed according to the current load (computed
		relative to the speed) on each slave.

		In tlb_dynamic_lb=0 mode; the load balancing based on
		current load is disabled and the load is distributed
		only using the hash distribution.

		Incoming traffic is received by the current slave.
		If the receiving slave fails, another slave takes over
		the MAC address of the failed receiving slave.


		Prerequisite:

		Ethtool support in the base drivers for retrieving the
		speed of each slave.

	balance-alb or 6

		Adaptive load balancing: includes balance-tlb plus
		receive load balancing (rlb) for IPV4 traffic, and
		does not require any special switch support.  The
		receive load balancing is achieved by ARP negotiation.
		The bonding driver intercepts the ARP Replies sent by
		the local system on their way out and overwrites the
		source hardware address with the unique hardware
		address of one of the slaves in the bond such that
		different peers use different hardware addresses for
		the server.

		Receive traffic from connections created by the server
		is also balanced.  When the local system sends an ARP
		Request the bonding driver copies and saves the peer's
		IP information from the ARP packet.  When the ARP
		Reply arrives from the peer, its hardware address is
		retrieved and the bonding driver initiates an ARP
		reply to this peer assigning it to one of the slaves
		in the bond.  A problematic outcome of using ARP
		negotiation for balancing is that each time that an
		ARP request is broadcast it uses the hardware address
		of the bond.  Hence, peers learn the hardware address
		of the bond and the balancing of receive traffic
		collapses to the current slave.  This is handled by
		sending updates (ARP Replies) to all the peers with
		their individually assigned hardware address such that
		the traffic is redistributed.  Receive traffic is also
		redistributed when a new slave is added to the bond
		and when an inactive slave is re-activated.  The
		receive load is distributed sequentially (round robin)
		among the group of highest speed slaves in the bond.

		When a link is reconnected or a new slave joins the
		bond the receive traffic is redistributed among all
		active slaves in the bond by initiating ARP Replies
		with the selected MAC address to each of the
		clients. The updelay parameter (detailed below) must
		be set to a value equal or greater than the switch's
		forwarding delay so that the ARP Replies sent to the
		peers will not be blocked by the switch.

1.) Install ifenslave, since bonding needs the ifenslave package and will not work without it:

apt install ifenslave

Disable NetworkManager.
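On a systemd-based Mint/Debian/Ubuntu install, disabling NetworkManager so it doesn't fight with ifupdown over the slave NICs looks roughly like this (run as root):

```shell
# Stop NetworkManager now and keep it from starting at boot.
systemctl stop NetworkManager
systemctl disable NetworkManager
# Optional: mask it so nothing else can pull it back in.
systemctl mask NetworkManager
```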

2.) Modify /etc/network/interfaces

In this example it is a server with 4 NICs named

eth0 eth1 eth2 eth3 #adjust to what you have.

The order here really matters, or things will NOT work.  We need to bring up the individual NICs first; otherwise the NICs will fail to join bond0 and your networking will be broken.

Explanation of bonding in /etc/network/interfaces

1. First we specify each NIC that will be part of our bond as "auto", "manual" (no IP address of its own) and "bond-master bond0".

2. Next we define our bond0, using auto, choosing bond-mode 6 (balance-alb) and declaring "bond-slaves none".  This is OK since we are actually enslaving the devices to bond0 via the bond-master line in each NIC's iface stanza.

3. Finally we set up our br0, which we tell to use just the bond0 port (whereas typically for bridging br0 in Linux we would tell br0 to use the actual physical NIC interfaces).

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    bond-master bond0
auto eth1
iface eth1 inet manual
    bond-master bond0
auto eth2
iface eth2 inet manual
    bond-master bond0
auto eth3
iface eth3 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-mode 6
    bond-slaves none

auto br0
iface br0 inet static
  # example addressing - adjust to your network
  address 192.168.1.10
  netmask 255.255.255.0
  gateway 192.168.1.1
  bridge_ports bond0
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0

3.) Apply the changes and reboot

The easiest way to get bonding working properly is to reboot the system; otherwise your bond will either not start, or only one slave will join the bond.

However, if you want to give it a quick shot you could bring down the network and then bring it back up.

systemctl stop networking

systemctl start networking

If it doesn't work, trust me, just restart the whole machine and you will be better off.
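Once networking is back up (or after the reboot), you can quickly confirm the enslavement with iproute2, assuming the interface names used above:

```shell
# List interfaces enslaved to the bridge and to the bond.
ip -br link show master br0    # should show bond0
ip -br link show master bond0  # should show eth0-eth3
```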


How to check the status of the bonding interface

cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:00:00:00:80:50
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:00:00:00:80:52
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:00:00:00:80:54
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:00:00:00:80:56
Slave queue ID: 0
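If you want to script a health check, the /proc output above is easy to parse.  A minimal sketch, using a heredoc copy of the relevant lines from the sample output (on a live system, use cat /proc/net/bonding/bond0 instead):

```shell
# Count enslaved interfaces and any down links from bonding status output.
bond_status=$(cat <<'EOF'
Bonding Mode: adaptive load balancing
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up
Slave Interface: eth2
MII Status: up
Slave Interface: eth3
MII Status: up
EOF
)
slaves=$(printf '%s\n' "$bond_status" | grep -c '^Slave Interface:')
down=$(printf '%s\n' "$bond_status" | grep -c 'MII Status: down')
echo "slaves=$slaves down=$down"
```

With all four slaves joined and healthy this prints slaves=4 down=0; anything else means a NIC failed to join or a link is down.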

Bonding Errors

br0: received packet on bond0 with own address as source address


If you get this error on your bridge, check out our separate article on the solution.


