I would have presumed that you have 2 x aggregates giving 2 x 2-gig links, and the resilience is that if one NIC fails in either bond, that bond carries on running as a 1-gig link until fixed. But our architect wants an active/passive (mode 1) bond across bond2 and bond3. I have set up tons of mode 1 bonds and mode 4 bonds but never tried bonding 2 x mode 4 bonds!
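I'm not even sure the kernel will accept one bond as a slave of another, but the sysfs interface at least lets you express what the architect is describing. A sketch (bond2/bond3 from the post; the top-level bond name, the NIC names, and kernel support for stacked bonds are all assumptions):

Code:
modprobe bonding max_bonds=0
# create the two mode 4 (802.3ad) bonds
echo +bond2 > /sys/class/net/bonding_masters
echo +bond3 > /sys/class/net/bonding_masters
echo 802.3ad > /sys/class/net/bond2/bonding/mode
echo 802.3ad > /sys/class/net/bond3/bonding/mode
echo +eth0 > /sys/class/net/bond2/bonding/slaves   # slaves must be down when added
echo +eth1 > /sys/class/net/bond2/bonding/slaves
echo +eth2 > /sys/class/net/bond3/bonding/slaves
echo +eth3 > /sys/class/net/bond3/bonding/slaves
# create the mode 1 (active-backup) bond on top
echo +bond0 > /sys/class/net/bonding_masters
echo active-backup > /sys/class/net/bond0/bonding/mode
echo +bond2 > /sys/class/net/bond0/bonding/slaves
echo +bond3 > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up

Note that a bond's mode has to be written before any slaves are added and while the bond is down.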
For some reason, on the previous distro (which I won't name), I could not get wireless networking under this configuration; the idea was a simple wlan0 interface so that, if either cable were to be severed, the machine would retain a wireless connection. The bonding was done for speed, and I believe I can't get faster than mode=4.
I'm setting up link aggregation. When I use two ports of the same NIC, everything works fine. When I use two ports of different NICs (Broadcom and Intel), it doesn't work. Does anyone know of limitations when using two different NICs for link aggregation?
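There is no inherent restriction on mixing NIC vendors in a bond, so a first diagnostic (a sketch; the interface names are examples) is whether both ports actually joined the same 802.3ad aggregator:

Code:
cat /proc/net/bonding/bond0     # each slave should report the same Aggregator ID
ethtool -i eth0                 # confirm which driver each port is using
ethtool -i eth1

A mismatch in negotiated speed or duplex between the two ports will also keep one of them out of the aggregator.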
I have two servers that have been running link aggregation and VLAN trunking for years. I've installed larger drives in one and done a fresh minimal install. In spite of configuring it as before, the network will not come up. I did discover that a minimal install does not include vconfig, which was causing one particular error when the VLANs tried to come up; that was solved by installing vconfig. However, at this point the problem appears to be with bonding eth0 and eth1. /proc/net/bonding/bond0 indicates: "bond bond0 has no active aggregator"
If I put the old drives back in and boot the old OS, everything starts up fine. I was even able to get it working with:
Code:
modprobe bonding
ip link set dev bond0 up
ifenslave bond0 eth0 eth1
service network restart
However, it does not survive a reboot. Is there a bug in the 2.6.18-194.11.4.el5PAE kernel or the network scripts? The previous kernel I was running is 2.6.18-128.1.10.el5PAE. For what it's worth, these are the configurations:
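The persistent equivalent of those manual commands on RHEL 5 lives in the ifcfg files; for comparison, a typical layout looks like this (the addresses and bonding options here are assumptions, not the original configs):

Code:
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is identical apart from DEVICE)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

"No active aggregator" is an LACP message, so it is also worth confirming that the switch ports are still configured as an 802.3ad/LACP channel group.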
I'm looking for a way (kernel patches, configuration, etc.) to bond multiple network interfaces together, but for limited purposes. Here's the setup. Machines A, B, C, and D each have 4 NICs, each of which is on a separate unmanaged switch. The connections are made in a corresponding way, e.g. eth0 of each machine is connected via switch 0, eth1 via switch 1, etc. There are also other machines which have only one NIC and are connected to switch 0 only. All NICs on A/B/C/D and all the switches are gigabit. The remaining machines have a low traffic level. Machines A/B/C/D need the extended bandwidth, and this bandwidth need usually involves only one connection at a time.
E.g. machine A transferring files to machine C with no other traffic going on. The goal is to cut transfer times from several hours to a few hours (such as 8 hours to 2 hours); transfers of up to a few terabytes at a time are involved. IEEE 802.1AX won't accomplish this. It requires special support from a single switch that all connections go to (raising costs and reducing reliability). Also, from the technical details of 802.1AX, it appears that the decision about which traffic goes over which physical link is based on destination information. It's unclear what impact this has, but it looks like at least a single TCP connection cannot use all physical links.
And possibly all traffic from host A to host B is limited to one physical link (no better than a round robin of crossover cables). What I am looking for is something that works entirely on an end-to-end basis within a LAN. Working at the link layer would be OK as long as it doesn't have the limitations of 802.1AX; working at the IP layer would be OK too (I can already envision the logic of how to make that work). This might be an experimental patch to the Linux kernel, if anyone has tried it. I have not dug into the kernel source to see what might be in there yet, but will eventually do that if there isn't a patch already available.
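For what it's worth, the stock bonding driver's balance-rr (mode 0) is documented to work over exactly this topology of corresponding ports on parallel, isolated switches, and unlike 802.3ad it stripes the packets of a single TCP connection across all slaves (at the cost of out-of-order delivery). A sketch, assuming the interface names above:

Code:
modprobe bonding mode=balance-rr miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1 eth2 eth3
ip addr add 192.168.0.1/24 dev bond0
# raising the reordering tolerance helps single-stream TCP over balance-rr
sysctl -w net.ipv4.tcp_reordering=127

The single-NIC machines on switch 0 are a wrinkle, though: balance-rr would stripe traffic toward them across all four switches, and the frames sent via switches 1-3 would never arrive. Traffic to those hosts would need a separate non-bonded path, or those hosts would need links to all four switches.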
We run redundant switches, with two NICs on each server connecting to them. We also run bonding on our servers. Because we have two switches, we can't run LACP or anything like that. If a switch goes into a crashed state where it doesn't pass traffic but still provides link, bonding thinks the interface is still up and will keep sending traffic through it. Does anybody know a better way to configure the failover of the interface? This would be similar to the situation with a media converter.
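The bonding driver's ARP monitor targets exactly this failure: instead of just watching carrier (miimon), it checks that probes to a target actually get answered. A sketch for active-backup (the target IPs are assumptions; use addresses reachable through both switches):

Code:
# /etc/modprobe.conf (or a file in /etc/modprobe.d/)
alias bond0 bonding
options bond0 mode=active-backup arp_interval=1000 arp_ip_target=10.0.0.1,10.0.0.2

With arp_interval set, a slave gets failed when the ARP probes stop coming back, even while the switch keeps the link light on.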
I am trying to use a shared bzip2 library in a program I'm writing. For the life of me I can't figure out the correct gcc switch to link it in. I've tried pkg-config, but I don't know the name of the .pc file. Is there any rule for these things?
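The general rule is that libfoo.so is linked with -lfoo, so for libbz2 (assuming it is installed in a standard library path):

Code:
gcc -o myprog myprog.c -lbz2

pkg-config only knows about libraries that ship a .pc file, and the classic bzip2 releases don't install one, which is why nothing turns up there.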
Got 4 onboard NICs and would like to bond them into a single 4Gb pipe. The machine is running a number of VMs. I can set up a LAG on my switch, but I am stumped on the Ubuntu side. I think I have to make a file with entries, but I'm not sure where or how.
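On Ubuntu the file is /etc/network/interfaces, and the ifenslave package provides the hooks. A sketch for an 802.3ad bond matching a LAG on the switch (the addresses are assumptions; older releases spell the options slaves/bond_mode):

Code:
# apt-get install ifenslave
# /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100

One caveat: 802.3ad hashes each flow onto one slave, so any single connection still tops out at 1Gb; the 4Gb shows up in aggregate across the VMs.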
I have several servers where we have bonded some NICs for redundancy, and they will of course switch from the primary to the secondary NIC if the connection state to the switches they are physically connected to is lost. But is there any way to sense upstream connectivity (off-switch) for each NIC and fail over even though the NIC itself still has a connection state to the switch it is plugged into? We are using Dell managed switches on VLANs with trunking.
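The bonding ARP monitor can approximate this if the probe target is an address on the same subnet but physically beyond the access switch, such as an upstream router (a RHEL-style sketch; the target IP is an assumption):

Code:
# in ifcfg-bond0
BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=10.0.0.254"

Because the probe has to cross the switch to get answered, a slave whose switch stops forwarding is failed even though its own link state stays up.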
I am trying to find the dynamic heap size and stack size of a running process on RHEL 5.5 and RHEL 6. I read that the 23rd parameter in the file /proc/pid/stat gives the heap size. Can you elaborate more on this? Also, is there any other way to do this?
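A labeled alternative is /proc/<pid>/status, which avoids counting fields by hand (the PID here is an example):

Code:
grep -E 'VmData|VmStk' /proc/1234/status
# VmData:    2048 kB   <- data segment size (includes the heap)
# VmStk:       88 kB   <- stack size

For what it's worth, proc(5) documents field 23 of /proc/pid/stat as vsize, the total virtual memory size, rather than the heap alone.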
I've run across a few threads in the archived forums regarding NIC bonding. I could be wrong, but I imagine most of the people looking at bonding really want what they would get from creating a LAG between their Ubuntu server and a managed switch capable of LACP trunks. If you have two NICs on your Ubuntu server and want 2Gbps throughput and failover, check out the page on creating a LAG. Obviously this requires your switch to be compatible, though. [URL]...
I have a problem where I'm using Ubuntu Linux to mount a Windows Vista machine's USB drive and access it on the web using Apache. I previously had the USB drive plugged into the Linux machine directly, and that was working via the web. FollowSymLinks is on in httpd.conf.
The mount works and I can see the files (see above) from my regular Linux user account. If I make a test file in /mnt and soft-link to that, I can see it on the web. So it's just the mount to the Vista machine that seems to be the problem. It's supposed to be a simple read-only mount, and the Apache login should (I think) be able to see the same generic root access permissions.
Log from Apache:

Code:
[Mon Apr 26 20:39:42 2010] [error] [client 18.104.22.168] Symbolic link not allowed or link target not accessible: /home/user1/pub_html/Music, referer: https://xx.xx.xx/~user1/music.html
The credentials file has a login and password matching a special read-only account on Vista. I can see the files on the system from Linux, but not via the web. As mentioned above, a different link to the same /mnt area works fine via the web. I've tried several different mount options with no success.
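One common culprit is that a CIFS mount maps every file to the mounting user's UID, which Apache's user then cannot read. A mount sketch that makes the share readable to Apache (the share name, mount point, and Apache UID are assumptions):

Code:
mount -t cifs //vista-box/usbdrive /mnt/vista \
    -o credentials=/etc/cifs.creds,ro,uid=www-data,file_mode=0444,dir_mode=0555

The "link target not accessible" wording can also mean Apache lacks traverse (execute) permission on one of the directories on the way to the target, so the dir_mode matters as much as the file_mode.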
I want to restrict the visitors to my web server to specific people I want to give access to, but those people have dynamic IPs. I want to use DynDNS and have each person update their IP address, then grant access based on the hostname pointing to that person's dynamic address.
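Apache's mod_authz_host can allow by hostname, which makes it perform a double-reverse DNS lookup per request (a sketch; the path and hostname are assumptions):

Code:
<Directory /var/www/private>
    Order deny,allow
    Deny from all
    Allow from someuser.dyndns.org
</Directory>

The catch is that the visitor's IP has to reverse-resolve to a name that forward-resolves back to the same IP, and DynDNS users rarely control their reverse DNS. A cron job that resolves each hostname and rewrites an Allow from <ip> line (followed by a graceful reload) tends to be more reliable.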
I installed Fedora 10 and used a dynamic IP at first. Then I found out that bonding works with static IPs only, so I switched to a static IP. Then, since I have 3 NIC cards, I configured bonding. I need help with these points:
1- Steps to follow so that I switch from dynamic to static IP.
2- Steps to follow so that I configure bonding.
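On Fedora both live in /etc/sysconfig/network-scripts. A sketch (the addresses and bonding mode are assumptions):

Code:
# ifcfg-bond0 -- the static IP goes here
DEVICE=bond0
BOOTPROTO=none        # "none" instead of "dhcp" is the dynamic-to-static switch
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BONDING_OPTS="mode=active-backup miimon=100"

# ifcfg-eth0 -- repeat for eth1 and eth2 with DEVICE changed
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Then service network restart brings the bond up with all three slaves.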
I am seeing an issue on a few servers where it doesn't appear that all NICs in the 802.3ad LAG are operating at the same level. A few of the servers have two bonds, each with two NICs. I have two NFS servers that each have 1 bond with 3 NICs. All are RHEL5 x64, 2.6.18. I think the reason I see one interface dominating RX and another dominating TX is the xmit_hash_policy, but there are three hosts that use this particular server for network traffic. That's 3 different physical MAC addresses. The layer2 algorithm should be fine in that situation, I would think. Would I just be better off with balance-rr?
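The layer2 policy hashes on (source MAC XOR destination MAC) modulo the slave count, so three client MACs can easily collapse onto one or two of three slaves. Note also that xmit_hash_policy shapes only what the server transmits; the RX distribution is decided by the switch's own channel hash. Before resorting to balance-rr, layer3+4 usually spreads flows better (a sketch):

Code:
# in ifcfg-bond0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"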
Is it possible to use Linux (especially Slackware) to bond 2 ADSL connections (ethernet modems)? For example, if I have 2 connections of 24Mb/s download and 512Kb/s upload, would I achieve 48Mb/s download and 1Mb/s upload? Something like that.
adsl1 modem <--- eth1 ---\
                          (slackbox router) --- eth0 ---> my server
adsl2 modem <--- eth2 ---/
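True bonding won't work across two independent ISPs, since the far ends terminate on different routers that know nothing about each other. The usual approach is equal-cost multipath routing on the slackbox: connections are balanced across both links, so aggregate throughput approaches 48Mb/s while any single download is still capped at 24Mb/s. A sketch (the modems' gateway addresses are assumptions):

Code:
ip route add default scope global \
    nexthop via 192.168.1.1 dev eth1 weight 1 \
    nexthop via 192.168.2.1 dev eth2 weight 1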
I am confused as to what is going on with a particular box I am working with. As you can see in the attached ifconfig printout, one eth port is basically used only for output while the other is really only used for input. I connect to the box via 10.20.40.104 for ssh, ftp, http, etc. I just want to know the name of what is happening (is it bonding? bridging?) and maybe some information about where it is configured. I looked in modprobe.conf.
I have a question about bonded NICs and the switches they are connected to.
I have a server which needs to send a lot of data to another server quickly. Both have multiple GbE NICs. I understand what is required at the server end (I think): a pseudo-interface such as bond0 is created, with the IP applied to that interface rather than to eth0 or eth1.
My question relates to the connection between the servers, i.e. the switch. Is a specific type of switch required for this to work, given that an IP may have 2 (or more) MAC addresses associated with it? How does the switch decide which port to send the traffic for the bond0 IP to?
Also, will this only help when multiple connections are being made? What I mean is: will each individual TCP connection use only the physical eth0 OR the physical eth1 interface, or can a single connection make use of the aggregated bandwidth, sending one packet to one physical interface and the next to the other, using the bond0 IP as the destination?
What I am trying to work out is whether, with a storage server connected to an application server and exporting storage over NFS or GlusterFS, an aggregated link would improve throughput.
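In the common modes the answer is per-flow. With 802.3ad or balance-xor, the transmitting bond picks one slave per flow using a hash, and the switch makes an equivalent per-flow choice in the other direction, so a single NFS TCP connection stays pinned to one link. The default transmit hash, from the bonding documentation, is (a comment sketch):

Code:
# default layer2 xmit_hash_policy, per Documentation/networking/bonding.txt:
#   slave_index = (src_mac XOR dst_mac) mod slave_count
# => every frame between one pair of hosts rides the same slave

Only balance-rr stripes one connection across slaves, at the cost of TCP reordering; for NFS between a single pair of servers, aggregation mainly helps once there are multiple concurrent flows.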
I have 4 DSL modems connected to 4 different ISPs. My scenario is:
a) My FC-2 machine with LAN IP=192.168.10.1 and bond0 IP=192.168.1.1
b) Modem-A LAN IP=192.168.1.2, ext IP=xxx.xxx.xxx.xxx
c) Modem-B LAN IP=192.168.1.3, ext IP=xxx.xxx.xxx.xxx
d) Modem-C LAN IP=192.168.1.4, ext IP=xxx.xxx.xxx.xxx
e) Modem-D LAN IP=192.168.1.5, ext IP=xxx.xxx.xxx.xxx
Modem-A, B, C, and D are LAN-connected to my FC-2 machine, and all 4 interfaces of my machine are in bond0. Now please help me: what default gateway should I set on my FC-2 machine? Or do I have to set 4 gateways? And will this configuration work?
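Since all four modems sit on the bond0 subnet, the gateway question can be answered with a single multipath default route that lists all four of them (a sketch; whether per-flow balancing across four consumer ISPs behaves well in practice is another matter):

Code:
ip route add default scope global \
    nexthop via 192.168.1.2 weight 1 nexthop via 192.168.1.3 weight 1 \
    nexthop via 192.168.1.4 weight 1 nexthop via 192.168.1.5 weight 1

Each new connection is hashed to one modem, so the four lines are shared per-flow rather than combined for any single download.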
As Xen is based on Linux, I'm hoping that someone out there can help me. I'm struggling with 802.3ad on a Cisco switch (LACP). I'm going to be using this for my storage network: a dedicated Cisco switch talking to a teamed pair of Intel NICs on iSCSI storage. From the Intel side I can set them to link aggregation, one team, 2Gb; this is fine and my config shows active/active on the NICs. How would I go about doing this from a Linux box? In order to remove any bottlenecks it needs to be active/active from Xen. If I do a pif-forget uuid= on a Xen server, I'm in complete control using Linux.
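From a plain Linux box the equivalent of the Intel teaming is bonding mode 802.3ad on the host plus an LACP channel group on the Cisco side (a sketch; interface names and port numbers are assumptions):

Code:
# Linux side
modprobe bonding mode=802.3ad miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1

# Cisco IOS side
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active    ! "active" selects LACP

That gives active/active in the 802.3ad sense: both links carry traffic, with each flow hashed onto one of them.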
I installed an HP ML350 G6 server. I would like to configure Ethernet failover. I got the script from my friend, configured it, and restarted the network services. I am getting the error message "Deprecated config file /etc/modprobe.conf, all config files belong to /etc/modprobe.d". The system is also automatically assigning an IP address.
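The warning means the bonding options should move out of /etc/modprobe.conf and into a file under /etc/modprobe.d/ (a sketch; the mode and options are assumptions):

Code:
# /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=active-backup miimon=100

The automatically assigned IP suggests the interface is still on DHCP; setting BOOTPROTO=none with a static IPADDR in ifcfg-bond0 stops that.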
I just set up NIC bonding in Ubuntu 10.04, following these instructions, and I've got it working except for one problem: every time I up or down a network device, or every time the system reboots, my routes go all to hell, with eth0 and eth1 entries next to my bond0 entries. When the eth0 and eth1 entries show up, my connection is hosed and I have to go in via the maintenance IP to kill each route one at a time, leaving only bond0. Here's how I want my routes to look at all times:
Code:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.87.9.0       0.0.0.0         255.255.255.0   U     0      0        0 bond0
0.0.0.0         10.87.9.1       0.0.0.0         UG    100    0        0 bond0

Here's my /etc/network/interfaces:
Upgraded a school network from Fast Ethernet to Gigabit Ethernet. Broke the network up into VLANs. Discovered that the router (a Cisco 2821) could only route between the VLANs at around 400Mb/s. Tested some layer 3 switches; they work very nicely but are more than we need. So I started putting some spare equipment we had together as a Linux router.
Result: underwhelmed. The machine has two Intel GigE interfaces. With the machine configured to route between two test VLANs, I get about 855Mb/s with a single interface (all VLANs trunked over the single interface). That's about what I'd expect, maybe a little low. With the two interfaces bonded, I get about the same.
For testing, I set up eight Windows machines, four on each VLAN. The Linux router is the only machine that can route between the two VLANs. I used Iperf to generate traffic and measure throughput between pairs of machines. Two machines on the same VLAN get about 300Mb/s between themselves. With the four machines organized into cross-VLAN pairs, I get about 855Mb/s total throughput on a single interface and very slightly more with two interfaces bonded.
The Linux router has an Intel Xeon E5506 CPU running at 2.13GHz, and these are built-in Intel GigE interfaces. I would expect a large boost from adding the second interface. I've confirmed that bonding is working (by pulling either of the cables and watching everything continue to function).
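For reference, the shape of the router setup under test looks like this (a sketch; the bonding mode, VLAN IDs, and addresses are assumptions):

Code:
modprobe bonding mode=802.3ad miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1
vconfig add bond0 10
vconfig add bond0 20
ifconfig bond0.10 192.168.10.1 netmask 255.255.255.0 up
ifconfig bond0.20 192.168.20.1 netmask 255.255.255.0 up
echo 1 > /proc/sys/net/ipv4/ip_forward

One likely explanation for the flat result: pulling a cable proves failover, not aggregation. In 802.3ad or balance-xor the transmit hash pins each flow to one slave, with the default layer2 policy hashing only on MAC addresses, and the switch applies its own per-flow hash on the return path. Unless the test flows happen to hash onto different slaves at both ends, the second link adds almost nothing.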