I am supposed to set up a system such that a single desktop screen can be seen on multiple projectors. The situation is something like this: we have a seminar and the demo will be shown from a single computer.
I have an old Pentium 3 computer that has ~7 NICs installed. These NICs are attached to modems and other networking equipment. According to the Linux ping page on computerhope.com, it seems that one can send a ping from a specific NIC. How would one go about this?
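For what it's worth, ping's -I option selects the source interface or address; a quick sketch (the interface names and addresses here are just examples):

```shell
# Send 3 pings out a specific NIC; replace eth2 with the NIC to test
ping -c 3 -I eth2 192.168.1.1

# -I also accepts a source IP address instead of an interface name
ping -c 3 -I 10.0.0.5 192.168.1.1
```

Note that replies still follow the routing table, so a return route over that NIC must exist.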
I have a CentOS 5.3 box with three network interfaces in it. Each interface is attached to a separate VLAN and I want traffic to stay on each network segment. What I can't figure out is why I cannot get each interface to have its own gateway; everything gets sent through the default gateway. This basically takes my possible 3Gb of total bandwidth and throws it down a single 1Gb pipe. Then, on top of that, if I take down the interface (ifdown) that has the current default gateway, I lose contact with the other two interfaces. When I look at the routes, each one of the interfaces shows the gateway as 0.0.0.0 and defers to the default route. So I delete the route and try to add a new route with:
[root@testsan ~]# ip route add 10.1.15.0/24 via 10.1.15.1 dev eth2
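One common approach here (a sketch only; the host address 10.1.15.10 and the table name are assumptions) is source-based policy routing: each interface gets its own routing table and gateway, so replies leave via the interface they arrived on:

```shell
# Register a custom routing table (number and name are arbitrary)
echo "1 vlan15" >> /etc/iproute2/rt_tables

# eth2's subnet and its own default gateway, in its own table
ip route add 10.1.15.0/24 dev eth2 src 10.1.15.10 table vlan15
ip route add default via 10.1.15.1 dev eth2 table vlan15

# Traffic sourced from eth2's address consults table vlan15
ip rule add from 10.1.15.10/32 table vlan15
ip route flush cache

# Repeat with separate tables for the other two interfaces
```

This also removes the ifdown problem, since no interface depends on another's default route.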
I have one router, a Linksys. It allows wireless and wired connections, as is normal. I have two XP machines connected by wire to the router and three Linux machines connected wirelessly. The XP machines both have IP addresses beginning with 192.168, while my three Linux machines have IP addresses that all begin with 172. None of the machines has a static IP address; all use automatic DHCP. I am told that the above scenario makes no sense. However, such is what I have, so, I trust, the theory and the facts do not gel. I would not care except that I cannot see all of the XP computers from some of my Linux boxes using the Nautilus network servers program.
I need the following: running XAMPP on an Ubuntu server with one NIC. Only the webserver has to be available on multiple IP addresses. What I have is 4 devices that communicate with the server's database servers.
Device 1 = MySQL on IP 192.168.0.100
Device 2 = MySQL on IP 192.168.0.101
Device 3 = MySQL on IP 192.168.0.102
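If the idea is to give the one NIC several addresses for the webserver to listen on, alias interfaces are the usual route on Ubuntu; a sketch for /etc/network/interfaces (the .10-.12 addresses are placeholders, not taken from the question):

```
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0

auto eth0:1
iface eth0:1 inet static
    address 192.168.0.11
    netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
    address 192.168.0.12
    netmask 255.255.255.0
```

Apache (as shipped with XAMPP) will then answer on all of them if it listens on 0.0.0.0:80.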
I'm trying to set up CentOS 5.5 with 1 NIC to have several IP addresses on the same subnet, each with a different MAC address. I tried macvlan and multimac, but both give the same MAC address (that of the physical NIC) for all IP addresses in the ARP table on remote hosts. Is it possible to send the 'right' MAC address in ARP replies for the corresponding IP address?
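In principle the macvlan driver gives each virtual interface its own MAC, and the MAC can also be set explicitly; a sketch with iproute2 (the names, MAC, and address are placeholders):

```shell
# Create a macvlan on top of eth0 with its own MAC address
ip link add link eth0 mac0 type macvlan
ip link set mac0 address 00:16:3e:00:00:01   # example locally administered MAC
ip addr add 192.168.1.201/24 dev mac0
ip link set mac0 up
```

ARP replies for 192.168.1.201 should then carry mac0's MAC rather than eth0's; if they don't, it is worth checking proxy-ARP and the arp_announce sysctls on the physical NIC.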
I have 3 external IPs, assigned to eth0, eth0:1 and eth0:2. I have a game bot that connects to a game network, but the game network only accepts a limit of 5 connections from the same IP. The game network has multiple IP addresses (e.g. game.com resolves to 1.1.1.1, 1.1.1.2, 1.1.1.3, etc.). How do I specify that a certain bot connects via eth0:1 or eth0:2? Currently all bots are using eth0's IP.
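The usual fix (a sketch; whether it applies depends on the bot) is to bind each bot's outgoing socket to one of the alias addresses. Aliases like eth0:1 are not separate interfaces, so you bind to the alias's IP rather than its name:

```shell
# Examples with common clients; 203.0.113.2 stands in for eth0:1's IP
nc -s 203.0.113.2 game.com 6667                  # netcat: -s sets the source address
curl --interface 203.0.113.2 http://game.com/    # curl can bind the same way
```

If the bot exposes no such option, LD_PRELOAD wrappers that inject a bind() call, or per-connection SNAT rules, are the usual workarounds.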
I've been trying to set up multiple IPs on my Fedora 14 box but nothing seems to work. Browsing the net, I found there are two ways to do it: one is eth0:0 through eth0:n, and the other is eth0-range0. Both are configs under network-scripts, but neither of them worked for me. Even grabbing a working example from my live server doesn't do the trick (though that server is CentOS 5.5).
Currently using eth0-range0:
ONBOOT=yes
IPADDR_START=192.168.1.127
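For comparison, a complete eth0-range0 file also needs an end address and a starting clone number; a sketch (the end address and netmask are assumptions):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0-range0
ONBOOT=yes
IPADDR_START=192.168.1.127
IPADDR_END=192.168.1.130
CLONENUM_START=0        # first alias becomes eth0:0
NETMASK=255.255.255.0
```

Without IPADDR_END and CLONENUM_START the range scripts typically do nothing, which may explain the silent failure.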
I have an application running inside our LAN on server 192.168.0.1:8080. I have configured the gateway firewall to direct all traffic on port 80 to port 8080 on 192.168.0.1, so I can access the application from outside the LAN. Now the problem starts when the application redirects traffic to another server, 192.168.0.2, according to user input. How can I configure the whole system so that I can access the application running on the second server as well?
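One hedged sketch: expose the second server on a second external port with DNAT (port 81 and eth0 as the WAN interface are assumptions):

```shell
# On the gateway: send external port 81 to the second internal server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 81 \
         -j DNAT --to-destination 192.168.0.2:8080
iptables -A FORWARD -p tcp -d 192.168.0.2 --dport 8080 -j ACCEPT
```

The application's redirects then have to send clients to the public address on port 81 rather than to 192.168.0.2 directly; a reverse proxy on 192.168.0.1 is the alternative that avoids touching the redirects.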
How do I block multiple ports from my internal LAN going out to the internet? I want to prevent LAN users from accessing a range of ports, for example 1500-10000.
I'm making a personal firewall script, just testing it for curiosity's sake.
Would I use the FORWARD chain policy to drop all packets to ports 1500:10000? Note: '#' stands for the root prompt.
#iptables -A FORWARD -s 192.168.0.1/24 -p tcp --dport 1500:10000 -j DROP
#iptables -A FORWARD -s 192.168.0.1/24 -p udp --dport 1500:10000 -j DROP
I would like a basic firewall on my netbook and first attempted this using Firestarter, as I have no experience writing iptables rules from first principles and, to be honest, the syntax looks horrific! The problem with Firestarter is that when I selected wlan0 as the internet-connected port, everything worked fine until I connected to a VPN, at which point nothing would work (the only error I got was "sendmsg not permitted" when pinging an IP address). My normal setup is this: normally I'm connected via wlan0 to the internet, but on one particular network I must activate the VPN to use anything, which creates another interface, tun0. Both wlan0 and tun0 are assigned IP addresses, but only tun0 will do anything (the network configures wlan0 to allow traffic only to the VPN gateway and nothing else). What I really need is some way of creating a basic firewall (drop all incoming except ports I specify) that lives on wlan0 unless tun0 is active, in which case it moves to tun0.
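A minimal sketch of the stated policy (drop all incoming except chosen ports, applied to whichever interface is live; the port list and the interface check are assumptions):

```shell
#!/bin/sh
# Use tun0 while the VPN is up, otherwise wlan0
IF=wlan0
ip link show tun0 >/dev/null 2>&1 && IF=tun0

iptables -F INPUT
# Keep loopback and established sessions working
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow chosen services (example: ssh)
iptables -A INPUT -i "$IF" -p tcp --dport 22 -j ACCEPT
# Drop everything else arriving on that interface
iptables -A INPUT -i "$IF" -j DROP
```

Running this from a NetworkManager dispatcher hook (or an if-up script) would re-apply it whenever tun0 comes or goes.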
I have 3 servers interconnected with IPs 192.168.150.1-3. The first two have internet connections and the third uses them as gateways. After googling and reading howtos, I managed to get it working. The firewall rule for ssh on the first server is defined
And on the third server the route is defined like this:
Code:
ip route add default scope global nexthop via 192.168.150.1 dev eth0 nexthop via 192.168.150.2 dev eth0
It works, but the problem is that connections to the third server show up as coming from 192.168.150.1 or 192.168.150.2. Is there any way to keep the original source address when connecting to 192.168.150.3?
I've got a problem with multiple ssh tunnels. The case is: I have 1 server running Slackware 13.0 with an external IP, and a few Windows machines. The inetd daemon is running on the server, and my script is listening on port 2345. I create multiple ssh tunnels from the client machines to port 2345 of the server in order to initiate script execution. For debugging, the script simply echoes the incoming information back to the connection initiator. This is how the connection is initiated.
Code:
ssh <user>@<my_server_IP> -L 5555:<my_server_IP>:2345
Running echo "hello" | nc -vn 127.0.0.1 5555 (against the port on the client machine that is forwarded to <my_server_IP>:2345) gives "hello" as output.
Code:
client1 port 5555|----ssh-tunnel---- eth0|-------server---------------|
[Code]...
The problem is that I need my script to execute some commands (registry parsing) on a remote client machine with the winexe utility. So I need to identify each tunnel, or each connection, in order to execute the command on each of the client workstations. I need at least to have access to some ID of the ssh session or tunnel through which a certain connection was initiated, and then use it to create a reverse tunnel or just connect to a certain client via that client's tunnel.
I have a wide area network with 7 CentOS servers running BIND and 1 Windows 2003 server. All 8 of these servers handle DHCP and DNS at their respective locations. At each site I can ping computer.site.company. I'd like to be able to resolve the DNS names from site to site, so from site1 I would like to be able to ping computer.site2.company and get a response.
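One common way to do this (a sketch; the zone name and forwarder address are placeholders) is a forward zone in each site's BIND pointing at the other site's DNS server:

```
// named.conf on site1: forward queries for site2's zone
zone "site2.company" {
    type forward;
    forward only;
    forwarders { 10.2.0.10; };   // site2's DNS server (example address)
};
```

Slave (secondary) zones pulled from each site are the alternative if you want the data to survive WAN outages.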
I've searched on the issue of multiple DHCP servers in one LAN. Just about everything out there is Windows- or Novell-based. Also, the typically asked scenario is 2 or more DHCP servers for failover purposes (one goes down, the other takes over).
What I want to do with DHCP is different. My purpose would best be described as "administrative separation". Basically, if a given MAC address is configured on a specific DHCP server, that server should be the one to answer, not the other. The problem is that we also need a default to handle unknown MACs, so the DHCP server without the MAC configured would answer anyway, even if it is the only one configured for global leasing. Timing would then be the determining factor.
The purpose is to set up a bunch of PXE network booting using program-generated DHCP configuration. This server won't always be up, so it can't be used for general purposes. The general-purpose DHCP server is part of a wireless system; it is configured by GUI and is impractical for the programmed PXE booting.
How can I make these work together with everything on the same LAN segment?
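With ISC dhcpd the separation can be sketched roughly like this (all addresses and names are placeholders): the PXE server refuses unknown MACs, so only listed machines get its answers, while the general-purpose server keeps its normal pool:

```
# dhcpd.conf on the PXE server: answer only explicitly listed MACs
subnet 192.168.10.0 netmask 255.255.255.0 {
    deny unknown-clients;         # ignore anyone not listed below
    next-server 192.168.10.5;     # TFTP server for PXE
    filename "pxelinux.0";
}
host build01 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.10.101;
}
```

The race remains for *known* MACs, since the general server will still answer them; excluding those MACs (or their addresses) from its pool is the usual complement.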
Is there a way to do multiple interfaces in tcpdump? I have found that when using "-i any", not all packets are captured (compared to "-i eth0" on a machine with only one interface). I need to monitor traffic on some machines with as many as 6 interfaces, and get these packets that "-i any" misses. When I give the "-i" option multiple times, it seems to only use the last one.
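Since tcpdump takes only one -i, one workaround (a sketch; the interface names are assumed) is a capture process per interface, each writing its own file:

```shell
#!/bin/sh
# One tcpdump per interface; each writes a separate pcap file
for ifc in eth0 eth1 eth2 eth3 eth4 eth5; do
    tcpdump -i "$ifc" -w "/tmp/capture-$ifc.pcap" &
done
wait    # Ctrl-C (or kill) stops all captures
```

The per-interface files can be merged afterwards with mergecap (from the Wireshark tools) if a single trace is needed.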
Current: Workstation --> Wireless Router --> Cablemodem --> Internet
IPs: 192.168.2.1 --> 192.168.2.5 > 192.168.1.5 --> 192.168.1.1 > {someIP} --> Internet
So my first link has a class C of 192.168.2.0. I want to add a different class C, 192.168.3.0, to that first link as well. The reason is my LAN traffic will be on 192.168.2.0 and traffic will be on 192.168.3.0. The wireless router can only be set up for one LAN IP, which is 192.168.2.5. Is there some way to set up a VPN or NAT or bridge or something else so I can run two class C's to the router and it will pass them along? The router is a Netgear WNDR3700.
I recently purchased a block of 5 IPs from Comcast. I have a computer running Arch Linux connected to the Comcast gateway they gave me. On my connected computer I have 2 Windows XP virtual machines running. Now I was wondering how can I make each of those virtual machines have a different public IP, because currently the only thing I can get working is have the computer and both virtual machines sharing the same public IP.
In my environment we are running DHCP on a Windows 2003 R2 server. This DHCP server is also used with Symantec's 3Com PXE for the desktops, so the desktops can PXE boot into Symantec Ghost and re-image the PCs with a Ghost (GHO) file. This DHCP server is responsible for assigning IP addresses to all desktops on the network.
We also have several branch offices which this DHCP server provides IP addresses to. These branch offices are on a separate network so I believe this is possible. Each branch office is running a Linux server so I would like to use Clonezilla and allow users in these offices to PXE boot to the local Linux server to run Clonezilla and re-image their notebook/desktops with a specified image that is on the local Linux server in each office. My only concern is the use of the same DHCP server. Is this possible?
Another project I am working on is setting up LTSP with openSUSE, in which I want to have about 10 or 15 diskless PCs boot up and retrieve the LTSP image. But this would also use the same DHCP server, and it is on the same network as the regular desktops that use the Symantec 3Com PXE service, so is this even possible? If not, any recommendation on how I could get it to work? Could proxyDHCP work, or MAC filtering, or even a separate VLAN?
I am working at a place that has 2 physical web servers, yadayada1 and yadayada2, but only one public IP address. I can use DynDNS to register 2 dynamic domains on the same IP address. How can I get yadayadayada1.dyndns.org to route to yadayada1 and yadayadayada2.dyndns.org to route to yadayada2?
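Since HTTP requests carry the hostname, a name-based reverse proxy on whichever machine receives port 80 can split traffic by Host header; a minimal Apache 2.2-era sketch (the internal addresses are assumptions):

```
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName yadayadayada1.dyndns.org
    ProxyPass        / http://192.168.1.10/
    ProxyPassReverse / http://192.168.1.10/
</VirtualHost>

<VirtualHost *:80>
    ServerName yadayadayada2.dyndns.org
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>
```

This needs mod_proxy and mod_proxy_http enabled. Port forwarding alone can't do it, because the router never sees the hostname.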
My Linux gateway has multiple addresses to the internet: eth0 = 76.148.200.3, eth0:0 = 76.148.200.4, eth0:1 = 76.148.200.5, and its own gateway, which is 76.148.200.2 (probably not relevant). I also have one which is not internet-facing, but local: eth0:2 = 192.168.0.1, netmask 255.255.255.0.
They all work fine and tested. Now I am sharing the internet through eth0 (76.148.200.3) to 192.168.0.1/24 and that's working fine. The script I use to do that is here...
Code:
#!/bin/sh
echo 1 >/proc/sys/net/ipv4/ip_forward
echo 1 >/proc/sys/net/ipv4/ip_dynaddr
iptables -t nat --flush
iptables -A FORWARD -i eth0 -d 192.168.0.1/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.168.0.1/24 -o eth0 -j ACCEPT
iptables -A FORWARD -j LOG
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Now all I want to change in the script is to share through 76.148.200.4 (eth0:0) instead of 76.148.200.3 (eth0), which it is already sharing through. I am sure this is easy, but I can't work it out, and iptables doesn't accept 'aliases'. How can I do this by modifying the script?
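Since MASQUERADE always uses the interface's primary address, one sketch is to replace it with SNAT and name the alias address explicitly:

```shell
# Replace the MASQUERADE rule: NAT outgoing LAN traffic to the alias IP
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 \
         -j SNAT --to-source 76.148.200.4
```

(192.168.0.0/24 is the same network the script writes as 192.168.0.1/24.) SNAT needs the address spelled out rather than inferred, which is exactly what a static alias provides.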
I'm trying to connect one computer to two others in an ad-hoc wireless setup.
[computer 1] ---- [computer 2] ---- [computer 3]
Computer 2 is running Linux and has a single NIC, wlan0. I want it to connect to both computer 1 and computer 3 so each computer can talk to the others. No switch is available, so it needs to be an ad-hoc setup.
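A sketch of joining all three machines to one ad-hoc cell with wireless-tools (the ESSID, channel, and addresses are placeholders; run the same commands on each machine with a unique IP):

```shell
# Join the shared ad-hoc cell
ifconfig wlan0 down
iwconfig wlan0 mode ad-hoc essid demo-net channel 6
ifconfig wlan0 192.168.5.2 netmask 255.255.255.0 up   # unique per machine
```

With all three in the same cell and subnet, computer 2 reaches both neighbours directly; computers 1 and 3 can also reach each other if they are in radio range, otherwise computer 2 would have to forward between them.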
I have run into a problem that I've tracked down to being a conflict between the "Upstart" init system, and how it handles multiple (alias) IP addresses per physical interface. The summary of the problem is that the interfaces are being configured in the background in parallel with the starting of daemons. One "feature" of this (apparently intended for pluggable devices that would add or remove an interface) is that the network daemons are restarted each time an interface is added (and presumably deleted). But this is a disaster when applied to alias IP addresses.
I first saw the effects of this when, during boot of Ubuntu Server, the screen showed a message about the OpenSSH daemon being restarted ... several times, a few seconds apart. At the time I didn't know what was causing it, but I didn't worry because the daemon ultimately was running when I needed it.
But now that I am deploying these servers for specific duty with many IP addresses per system (per network interface), the symptoms are becoming serious, and I need a solution.
1. The IP addresses are coming online too slowly. Apparently the time it takes to restart each daemon is added for each address being configured.
2. It appears to be disrupting some daemons sometimes. Occasionally, some daemon just ends up hung somewhere, or dies. Too many restarts.
3. Sometimes few or even no alias addresses get configured. This might be due to a daemon getting hung, and the whole sequence just not finishing.
4. The "nsd" name server as packaged by Ubuntu doesn't deal well with this at all. It needs all its IP addresses to be up when it starts, or else it won't start. The Ubuntu package doesn't include any if-up script at all, although I'm not sure that would do any good.
What I need is a way to configure all these alias IP addresses so they are all set up immediately when the point is reached where network interfaces are brought up for the first time. These are all static, and all are aliases on Ethernet NICs on PCIe cards or integrated on the mainboard. None of them are pluggable. I did run a manual test of "ifconfig" in a loop configuring 2540 alias IP addresses on eth0 and it took only 2 seconds (no if-up triggers or daemon restarts there). So I know it's fast if nothing else is done between these steps.
Even for pluggable physical interfaces, I see no reason to even try to step through every alias (if it has aliases) with a daemon restart. If an alias IP address is added on later, then I can understand doing it. But if you have a list of 100 aliases for a physical interface, they really should all be done ... or at least attempted ... at once, and do any triggers needed after that.
So, how can I configure or modify Ubuntu Server 9.10 to do that?
I have each alias listed in the "/etc/network/interfaces" file with a separate "auto" and "iface" section for each one, with sequential sub-interface numbers appended to the interface name. I tried it without those sections (e.g. just "address" and other items in sequence) and that prevents the system from even coming up (bootable CD to the rescue to undo that). At least ctrl-alt-del did reboot it.
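One workaround sketch (the addresses and alias count are placeholders): configure only the physical interface in /etc/network/interfaces and add every alias in a single post-up step, so the if-up machinery fires, and daemons restart, only once:

```
# /etc/network/interfaces -- a single stanza instead of one per alias
auto eth0
iface eth0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    gateway 10.0.0.1
    post-up /usr/local/sbin/add-aliases eth0
```

and the helper script:

```shell
#!/bin/sh
# /usr/local/sbin/add-aliases (chmod +x): add all aliases in one pass
IF="$1"
for i in $(seq 0 99); do
    ifconfig "$IF:$i" "10.0.1.$((i + 10))" netmask 255.255.255.0
done
```

This matches the ifconfig-in-a-loop timing measured above, since nothing runs between the individual addresses.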
I tried to attach the /etc/network/interfaces file, but I don't know if it worked because I see no confirmation of it. If it didn't attach and you need to see it, say so, and I'll just paste it in a followup.
I recently signed up for the IPREDator service, and one limitation I've found is that having 2 computers I cannot have both of them connected at the same time. So, I decided to have 1 of them connected (my 'server'), and have the other route all of its traffic over the 'servers' VPN.
My server connects to the IPREDator VPN on interface ppp0. My server will allocate ppp1 for the VPN from my client. My server's LAN address is 192.168.1.1. My client's LAN address is 192.168.1.2.
On the server perform the following...
Code:
sudo apt-get install pptpd
Modify /etc/pptpd.conf to have the following options:
We need to restart pptpd and networking on the server (I would just restart the server). Make sure you connect to the IPREDator VPN on the server first (otherwise ppp0 won't be assigned to it). Then, on the client, click Network Manager, then VPN Connections, and click your new VPN.
You should be prompted for your password (the default in this guide is just 'password'). You should now be connected via PPTP to your server, which is in turn connected to the IPREDator VPN, and all of your traffic should be tunnelled as such. I've probably made a ton of mistakes in this guide, and there's no doubt a hundred different ways to make this more elegant.
I found one strange issue with Ubuntu; can anyone say whether it's a bug or by design? If I have two nameservers in my resolv.conf, Ubuntu only checks the first (and receives a "not found" reply from there) and never moves on to the next nameserver. This behaviour is very different from Windows or other Linux systems.
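For what it's worth, the glibc resolver only moves to the next nameserver on a timeout or error, not on a negative ("not found") answer: an NXDOMAIN from the first server is treated as authoritative. The timing behaviour itself is tunable in resolv.conf (the values here are examples):

```
nameserver 192.168.1.1
nameserver 8.8.8.8
options timeout:2 attempts:2 rotate   # rotate spreads queries across servers
```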