I just set up NIC bonding in Ubuntu 10.04, following these instructions, and I've got it working except for one problem: every time I up or down a network device, or every time the system reboots, my routes go all to hell, with eth0 and eth1 entries appearing next to my bond0 entries. When the eth0 and eth1 entries show up, my connection is hosed and I have to go in via the maintenance IP to kill each route one at a time, leaving only bond0. Here's how I want my routes to look at all times:
Code:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.87.9.0       0.0.0.0         255.255.255.0   U     0      0        0 bond0
0.0.0.0         10.87.9.1       0.0.0.0         UG    100    0        0 bond0

Here's my /etc/network/interfaces:
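(The interfaces file itself didn't make it into the quote.) For what it's worth, stray eth0/eth1 routes usually come from the slave interfaces having their own address stanzas or being brought up by DHCP. A minimal sketch of a Lucid-style /etc/network/interfaces where only bond0 carries an address, with the addresses and the bonding mode being assumptions rather than the poster's real config, and with option spellings that vary slightly between ifenslave-2.6 versions:

Code:
# /etc/network/interfaces -- illustrative sketch, not the poster's file
# slaves get no address of their own, so they never add routes
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto bond0
iface bond0 inet static
    address 10.87.9.10        # assumed host address
    netmask 255.255.255.0
    gateway 10.87.9.1
    slaves eth0 eth1
    bond_mode active-backup
    bond_miimon 100

With the slaves declared manual, ifup/ifdown and reboots should leave only the bond0 entries in the routing table.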
I've run across a few threads in the archived forums regarding NIC bonding. I could be wrong, but I imagine most of the people looking at bonding really want what they would get from creating a LAG between their Ubuntu server and a managed switch capable of LACP trunks. If you have two NICs on your Ubuntu server and want 2 Gbps throughput and failover, check out the page on creating a LAG. Obviously this requires your switch to be compatible, though. [URL]...
I have a question about bonded NICs and the switches they are connected to.
I have a server which needs to send a lot of data to another server quickly. Both have multiple GbE NICs. I understand what is required at the server end (I think) in that a pseudo-interface is created such as bond0 with the IP applied to that interface rather than eth0 and eth1.
My question relates to the connection between the servers, i.e. the switch. Is a specific type of switch required for this to work, given that one IP will have 2 (or more) MAC addresses associated with it? How does the switch decide which port to send the traffic for the bond0 IP to?
Also, will this only work when multiple connections are being made? What I mean is, will each individual TCP connection only use either the physical eth0 OR the physical eth1 interface, or can a single connection make use of the aggregated bandwidth, sending one packet to one physical interface and another to the other physical interface, using the bond0 IP as the destination?
What I am trying to work out is if I had a storage server connected to an application server and exporting storage using NFS or GlusterFS, would an aggregated link improve throughput?
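Not an authoritative answer, but in the hashing modes (802.3ad/balance-xor on the server, plus the switch's own port-channel hash) each flow is pinned to one slave, so a single TCP connection tops out at one link's speed. The only standard mode that stripes packets of one flow across slaves is balance-rr, and it needs the switch ports grouped into a static (non-LACP) aggregation and tolerates some packet reordering. A rough runtime sketch, with interface names and the address assumed:

Code:
# balance-rr: can push one TCP flow over both links, at the cost of reordering
# (requires a static link-aggregation group on the switch side)
modprobe bonding mode=balance-rr miimon=100
ip addr add 192.168.50.10/24 dev bond0     # assumed address
ip link set bond0 up
ifenslave bond0 eth0 eth1

For NFS or GlusterFS between two hosts, running multiple parallel streams over an 802.3ad bond is often the more robust way to use both links than trying to make one flow faster.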
I installed Fedora 10 and used a dynamic IP at first. Then I found out that bonding works with a static IP only, so I switched to a static IP. Then, since I have 3 NICs, I configured bonding. I need help with these points:
1- steps to follow so that I switch from dynamic to static IP
2- steps to follow so that I configure bonding
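A rough sketch of both steps on a Fedora/RHEL-style system using the network-scripts files (the addresses, mode and number of slaves shown are assumptions; repeat the slave file for each NIC):

Code:
# /etc/sysconfig/network-scripts/ifcfg-bond0  -- the static IP lives here
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none            # static, no DHCP
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  -- one file per slave (eth1, eth2 alike)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

On older initscripts that ignore BONDING_OPTS, the equivalent goes into /etc/modprobe.conf as "alias bond0 bonding" plus "options bonding mode=active-backup miimon=100"; then "service network restart" applies it.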
I am seeing an issue on a few servers where it doesn't appear that all the NICs in an 802.3ad LAG are operating at the same level. A few of the servers have two bonds, with two NICs in each bond. I also have two NFS servers, each with one bond of three NICs. All are RHEL 5 x64 (2.6.18). I think the reason I see one interface dominating RX and another dominating TX is the xmit_hash_policy, but there are three hosts that use this particular server for network traffic, i.e. three different physical MAC addresses, so the layer2 algorithm should be fine in that situation, I would think. Would I just be better off with balance-rr?
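One thing worth keeping in mind: xmit_hash_policy only controls the transmit side of the bond; the receive distribution is decided entirely by the switch's own port-channel hash, which is why one slave can dominate RX while another dominates TX. Before switching to balance-rr it may be worth just inspecting what the bond and the counters say, roughly like this (standard proc/sysfs paths, interface names assumed):

Code:
# aggregator state, per-slave status and (on recent drivers) the transmit hash policy
cat /proc/net/bonding/bond0
# per-slave byte counters, to see how RX and TX actually split
grep . /sys/class/net/eth*/statistics/rx_bytes /sys/class/net/eth*/statistics/tx_bytes

With only three client MACs, layer2 hashing can easily land two of them on the same slave; layer3+4 (set in the bonding options, with a matching src-dst-ip/port load-balance setting on the switch) usually spreads a small number of hosts better than falling back to balance-rr.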
Is it possible to use Linux (specifically Slackware) to bond two ADSL connections (Ethernet modems)? For example, if I have two connections of 24 Mbps download and 512 Kbps upload, could I achieve 48 Mbps download and 1 Mbps upload, something like that?
adsl1 modem <------ eth1--- (slackbox router) --- eth0---> my server
adsl2 modem <------ eth2----
I am confused as to what is going on with a particular box that I am working with. As you can see in the attached ifconfig printout, one eth port is basically only used for output while the other is only really used for input. I connect to the box via 10.20.40.104 for SSH, FTP, HTTP, etc. I just want to know the name of what is happening (is it bonding? bridging?) and maybe some information about where it is configured. I looked in modprobe.conf.
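A few quick checks that usually reveal what the box is doing (standard tools, nothing specific to that machine is assumed):

Code:
ls /proc/net/bonding/ 2>/dev/null && cat /proc/net/bonding/*   # present only if bonding is active
brctl show                                                     # lists bridges and their member ports
grep -r bond /etc/modprobe.conf /etc/modprobe.d/ /etc/sysconfig/network-scripts/ 2>/dev/null

One port carrying mostly TX and the other mostly RX is typical of a bond where the server's transmit hash and the switch's hash have picked different slaves, rather than of bridging.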
I have 4 DSL modems connected to 4 different ISPs. My scenario is:
a) My FC-2 machine with LAN IP = 192.168.10.1 and bond0 IP = 192.168.1.1
b) Modem-A LAN IP = 192.168.1.2, ext IP = xxx.xxx.xxx.xxx
c) Modem-B LAN IP = 192.168.1.3, ext IP = xxx.xxx.xxx.xxx
d) Modem-C LAN IP = 192.168.1.4, ext IP = xxx.xxx.xxx.xxx
e) Modem-D LAN IP = 192.168.1.5, ext IP = xxx.xxx.xxx.xxx
Modems A, B, C and D are LAN-connected to my FC-2 machine, and all 4 interfaces of my machine are in bond0. Now please help me: what default gateway should I set on my FC-2 machine? Or do I have to set 4 gateways? And will this configuration work?
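Bonding really expects all slaves to face the same switch/L2 segment, so four links to four different modems with four different gateways isn't what it is designed for, and with a single default gateway only one modem would carry the outbound traffic. What is usually done instead is per-flow load balancing over separate (non-enslaved) interfaces with an iproute2 multipath route. A sketch, using the gateways listed above, with interface names assumed and ignoring the return-path/NAT details each ISP needs:

Code:
# one default route with four next-hops; the kernel spreads flows across them
ip route replace default \
    nexthop via 192.168.1.2 dev eth0 weight 1 \
    nexthop via 192.168.1.3 dev eth1 weight 1 \
    nexthop via 192.168.1.4 dev eth2 weight 1 \
    nexthop via 192.168.1.5 dev eth3 weight 1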
As Xen is based on Linux, I'm hoping that someone out there can help me. I'm struggling with 802.3ad on a Cisco switch (LACP). I'm going to be using this for my storage network: a dedicated Cisco switch talking to a teamed pair of Intel NICs for iSCSI storage. From the Intel side I can set them to link-aggregated, one team at 2 Gb; this is fine, and my config shows active/active on the NICs. How would I go about doing this from a Linux box? In order to remove any bottlenecks it needs to be active/active from Xen. If I do a pif-forget uuid= on a Xen server I'm in complete control using Linux.
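A rough runtime sketch of an active/active 802.3ad bond on a generic Linux host (interface names and the address are assumptions; the distro's own config files are needed to make it persistent), with the matching Cisco side shown as a comment:

Code:
# Cisco side, per member port:   channel-group 1 mode active   (LACP)
# Linux side:
modprobe bonding mode=802.3ad miimon=100 lacp_rate=fast
ip link set bond0 up
ifenslave bond0 eth0 eth1
ip addr add 10.0.0.10/24 dev bond0       # assumed iSCSI storage address

Even active/active, 802.3ad hashes per flow, so a single iSCSI session still rides one 1 Gb link; multiple sessions or MPIO are needed to use both members at once.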
I installed an HP ML350 G6 server. I would like to configure Ethernet failover. I got a script from my friend, configured it and restarted the network services. I am getting the error message "Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/". The system is also automatically assigning an IP address.
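That message is only a warning that module options should now live under /etc/modprobe.d/ rather than in /etc/modprobe.conf. A minimal sketch of the moved config (the mode/miimon values are assumptions; keep whatever the friend's script actually used):

Code:
# /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=active-backup miimon=100

The automatic IP assignment is a separate issue: set BOOTPROTO=none in the ifcfg files for bond0 and its slaves so the interfaces stop pulling DHCP leases.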
Upgraded a school network from FastEthernet to GigabitEthernet. Broke the network up into VLANs. Discovered that the router (Cisco 2821) could only route between the VLANs at around 400Mb/s. Tested out some layer three switches. They work very nicely, but are more than we need. So I started putting some spare equipment we had together as a Linux router.
Result: Underwhelmed. The machine has two Intel GigE interfaces. With the machine configured to route between two test VLANs I get about 855Mb/s with a single interface (all VLANs trunked over the single interface). That's about what I'd expect. Maybe a little low. With the two interfaces bonded, I get about the same.
For testing, I set up eight Windows machines, four on each VLAN. The Linux router is the only machine that can route between the two VLANs. I used Iperf to generate traffic and measure throughput between pairs of machines. Two machines on the same VLAN get about 300Mb/s between themselves. With the four machines organized into cross-VLAN pairs, I get about 855Mb/s total throughput on a single interface and very slightly more with two interfaces bonded.
The Linux router has an Intel Xeon E5506 CPU running at 2.13GHz and these are Intel GigE interfaces (built-in). I would expect to get a large boost by adding the second interface. I've confirmed that bonding is working (by pulling either of the cables and watching everything continue to function).
I am running RHEL 5.3 on a blade server w/ 2 NICs that are bonded. I have 2 VLANs that I am trying to configure. I have created the network-scripts ifcfg-bond0.<vlan#>. I can ping the device but the gateway won't ping. I am in console mode so cutting and pasting output doesn't work.
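For reference, a RHEL-style VLAN-on-bond file looks roughly like this (the VLAN ID and addresses are placeholders, not the poster's values):

Code:
# /etc/sysconfig/network-scripts/ifcfg-bond0.20
DEVICE=bond0.20
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
IPADDR=192.168.20.10
NETMASK=255.255.255.0

If the device pings but the gateway doesn't, the usual suspects are the switch port not trunking that VLAN tagged, or the 8021q module not being loaded (check with lsmod | grep 8021q).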
I would have presumed that you'd have 2 aggregates giving 2 x 2 Gig links, and the resilience is that if one NIC fails in either bond you carry on with that bond running as a 1 Gig link until it's fixed. But our architect wants an active/passive (mode 1) bond across bond2 and bond3. I have set up tons of mode 1 bonds and mode 4 bonds, but I've never tried bonding two mode 4 bonds!
For some reason, on the unmentioned former distro, I could not get wireless networking to coexist with this configuration and a simple wlan0 interface, the idea being that if either cable were severed it would retain a wireless connection. The bonding was done for speed, and I believe I can't get faster than mode=4.
We run redundant switches that two NICs on each server connect to. We also run bonding on our servers. Because we have two switches, we can't run LACP or anything like that. If a switch goes into a crashed state where it doesn't pass traffic but still provides link, bonding thinks the interface is still up and thus will still send traffic through it. Does anybody know a better way to configure the failover of the interface? This would be a similar situation to somebody using a media converter.
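The usual answer to "link stays up but the switch stops forwarding" is to monitor reachability instead of carrier, i.e. the bonding driver's ARP monitor. A sketch of the module options (the target IP is an assumption; pick something always reachable through both switches, such as a shared gateway address):

Code:
# /etc/modprobe.d/bonding.conf -- fail a slave when the target stops answering ARP,
# not only when the link light goes out
options bonding mode=active-backup arp_interval=1000 arp_ip_target=10.0.0.1

arp_interval is in milliseconds, and several comma-separated arp_ip_target addresses can be listed; this also covers the media-converter case where carrier stays up past the failure.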
I'm trying to configure my Red Hat server to use port bonding for two interfaces going to a switch, and I followed this Red Hat guide [URL]...-redhat-server but when I attempt to bring the bond up, it simply uses mode 0 (round-robin) instead of mode 1, which I specifically configured in /etc/modprobe.conf with:
Code:
options bond0 mode=1
Any ideas why the bond would continue to use round robin when I set it to active-backup?
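One plausible cause, not a definitive diagnosis: an "options bond0 ..." line only takes effect when the module is loaded via a matching "alias bond0 bonding", and it is silently ignored if the bonding module is already loaded with defaults by the time the bond comes up. Two sketches that tend to be more reliable (the mode/miimon values just mirror what the poster wants):

Code:
# /etc/modprobe.conf (older style) -- make sure the alias exists and the
# options target the module itself, and add miimon so failover actually works
alias bond0 bonding
options bonding mode=1 miimon=100

# or, on RHEL 5.3 and later, skip modprobe.conf and put it in the interface file:
# /etc/sysconfig/network-scripts/ifcfg-bond0
#   BONDING_OPTS="mode=active-backup miimon=100"

Either way, "cat /proc/net/bonding/bond0" after a network restart shows which mode actually got applied; unloading the module first (rmmod bonding) makes sure stale defaults aren't lingering.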
I just set up a new LAMP server (CentOS 5.5 x86_64) with channel bonding on NetXtreme II BCM5709 Gigabit Ethernet (IBM x3650 M3). The problem is I am not able to connect to this server from a different VLAN. The server is also unable to ping hosts in different VLANs. But everything works fine when I stay within the same VLAN. Here's the config:
I'm trying to do some network bonding in RHEL 5.4, and I'm not seeing the results I expect. I have set up a number of servers using mode 1 with no issue, but I need to configure a new server differently: bond1 is created from eth1 and eth2 bonded together using mode 802.3ad with layer2 LACP hashing, and bond0 is created from bond1 and eth3 using mode active-backup with bond1 as the primary interface. When I bring the network services up, all traffic passes over eth3 instead of bond1, which is the primary. Below are the modprobe.conf and ifcfg-* files involved.
My modprobe.conf:

Code:
alias eth0 e1000
alias eth1 e1000
alias eth2 e1000
alias eth3 e1000
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptsas
alias scsi_hostadapter2 qla2xxx
[Code].....
I have two servers that have been running link aggregation and VLAN trunking for years. I've installed larger drives in one and done a fresh minimal install. In spite of configuring it as before, the network will not come up. I did discover that a minimal install does not include vconfig, which was causing one particular error when the VLANs tried to come up; that was solved by installing vconfig. However, at this point the problem appears to be that eth0 and eth1 are not bonding. /proc/net/bonding/bond0 indicates: "bond bond0 has no active aggregator"
If I put the old drives back in and boot the old OS everything starts up fine. I was even able to get it working with:
Code:
modprobe bonding
ip link set dev bond0 up
ifenslave bond0 eth0 eth1
service network restart
However, it does not survive a reboot. Is there a bug in the 2.6.18-194.11.4.el5PAE kernel or the network scripts? The previous kernel I was running is 2.6.18-128.1.10.el5PAE. For what it's worth, these are the configurations:
On my computer I have one network card and one USB modem for mobile Internet. I can access the Internet using both devices, but when I am using the network card I am unable to listen to radio stations, because radio streaming is not allowed at the workplace. So I am trying to find a solution, and the solution seems to be to use the bonding module on Linux. But I have a few questions:
1) I don't know if it is possible to use this module with two such different devices.
2) I don't know if, even with bonding enabled, I will be able to "select" the right path in order to reach the radio stream, or whether the right path is selected automatically. When I connect to the radio station using the network card, it opens the connection but closes it immediately, and I am not sure the kernel is clever enough to use the other connection in this case.
How to bond NICs in Ubuntu server 10.04/10.10 correctly.
I installed ifenslave-2.6 on a fresh install. Then I edited /etc/network/interfaces, adding a new entry for bond0 and putting bond-mode 6, bond-miimon 100 and slaves eth1 eth2 at the end of the bond0 stanza.
That did not work; it kept telling me it cannot bring up the interface, and even after a reboot it still does not work.
I also followed a tutorial I found online about editing /etc/modprobe.d/aliases.
That did not really work either: the interface came up but had no slaves.
Can anyone give me a clue, or a current method to bond interfaces on 10.04/10.10?
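For comparison, this is the shape of config that generally works on 10.04/10.10 with ifenslave-2.6 installed (a sketch with assumed addresses; option spellings such as bond_mode vs bond-mode vary between ifenslave versions, so it's worth checking the examples shipped with the package):

Code:
# /etc/network/interfaces -- slaves declared manual, only bond0 gets an address
auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto bond0
iface bond0 inet static
    address 192.168.0.20       # assumed
    netmask 255.255.255.0
    gateway 192.168.0.1
    slaves eth1 eth2
    bond_mode balance-alb      # mode 6
    bond_miimon 100

After an ifdown/ifup of bond0 (or a reboot), "cat /proc/net/bonding/bond0" should list both slaves; if it shows none, the bonding module or the ifenslave hook scripts aren't being triggered.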
I'm attempting to install a new openSUSE 11.2 system (64-bit) onto an Intel-based server. I'm having no success bonding any of my network interfaces into a single 802.3ad interface. I'm fairly certain the problem is not the switch, as I'm using the same switch (Foundry FastIron SuperX) on which I have an older SLES10-based system running with 4 NIC ports bonded together in this fashion (as well as a number of W2K3 systems using SLA). I set up the older SLES10 box by editing the /etc/sysconfig/network/ifcfg-blah files at the command line, as bonding was not supported by YaST in that version.
Now, I understood (perhaps incorrectly) that YaST in openSUSE 11.2 supports bonding. So I bowed to the inevitable and, from the GUI, ran YaST -> Network Settings and worked through all the tabs. I faithfully set up all the NIC ports with no addresses of their own for bonding use, then added a new bond0 interface to which the slave NIC ports are bonded. The default gateway was set to point to my router via the new bond0 interface. I ensured that the firewall was off and that the relevant interfaces all pointed to the Firewall Disabled zone. After setting things up as required, I rebooted the system and ran a test ping to a few addresses on the network. No joy. I checked the routing table via ip route, then netstat -nr, and then route on its own. The routing tables looked as expected each time.
Getting down to first principles, I ran ifconfig and again everything looked fine. I checked via modprobe -l bonding just in case there was something fundamental missing. Nope. I assumed that YaST hadn't worked as advertised and jumped online to grep for more info. Firstly, I see that the HOWTO is out of date (Bonded Interfaces With Optional VLAN - openSUSE). I also noted that the bonding module's mode option no longer supports numbers, but upon looking at my ifcfg-bond0 file it contained mode=802.3ad rather than mode=4, so that wasn't it.
On comparison with SLES10 I did notice that config files of the ifcfg-eth-id-XX:XX:XX:XX:XX:XX type are no longer used in openSUSE 11.2, with only ifcfg-ethX config files present. I examined those and noted that the _nm_name and UNIQUE options are also no longer used. No need for lspci (or the hardware control panel) when setting up bonding? Example of ifcfg-bond0 file contents (edited for brevity):
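The poster's example didn't survive the quote, so for comparison here is a generic openSUSE 11.x-style ifcfg-bond0, my own sketch with assumed values rather than the poster's file:

Code:
# /etc/sysconfig/network/ifcfg-bond0
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.10.5/24'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100'

The slave ifcfg-ethX files typically carry just BOOTPROTO='none' and STARTMODE='hotplug' (or 'auto'), and /proc/net/bonding/bond0 shows whether an aggregator actually formed against the Foundry switch.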
I have an Asus R1F laptop with a built-in wireless card. I also have 2 USB dongles that we used to use on old desktop machines. All of the cards work fine in Ubuntu. I'm running Ubuntu 10.10, 2.6.35-22-generic-pae.
The problem is that I have really bad signal on my side of the house and the connection is dropped quite often. I've tried just connecting all the devices at the same time; they all connect fine, but nothing actually works. I read about bonding interface cards on this blog, and that would solve all my problems if a USB dongle could act as a backup for when the normal connection is dropped and while it is reconnecting.
I tried what was written and also did some Googling, but every way that I try seems to work fine until two wireless devices are bonded. When that happens they both disconnect and reconnect like crazy. This happens both with and without NetworkManager running.
Built-in card [wlan0]: Intel PRO/Wireless 3945ABG
USB [wlan1]: Cisco-Linksys Compact Wireless-G USB Adapter
USB [wlan2]: Realtek RTL 818713 WLAN Adapter

I load the module:

Code:
modprobe bonding mode=1 miimon=100 downdelay=200 updelay=200
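A rough sequence for finishing that setup by hand (the address is an assumption). The constant reconnect loop is often caused by the slaves' MAC being rewritten to the bond's MAC, which access points reject, so adding fail_over_mac=active (available on a 2.6.35 kernel) is worth trying so each wlan keeps its own MAC:

Code:
# stop NetworkManager first so it doesn't fight over the wireless devices
modprobe bonding mode=1 miimon=100 downdelay=200 updelay=200 fail_over_mac=active
ip link set bond0 up
ip addr add 192.168.1.60/24 dev bond0     # assumed LAN address
ifenslave bond0 wlan0 wlan1               # each wlan must already be associated to the AP

Even then, only active-backup (mode=1) is realistic over wireless; the load-balancing modes assume Ethernet-style MAC handling that most access points won't accept.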