Server :: Local DNS Server With No Internet Access
Oct 1, 2009
In my computer networking class I have the option of doing a project where I set up a DNS server for our classroom network. The problem is that this network is totally separate from the school network and we aren't allowed to connect it to the internet. I want all the machines to be able to ping each other by name instead of IP address, using DNS rather than hosts files on all 20 computers. I read on a site somewhere that you cannot do this because the DNS queries will always go to the root servers. Is this correct? Is there some way I can do this using DNS? The machine in question is running Ubuntu 9.04.
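For what it's worth, an authoritative nameserver answers queries for zones it owns without ever contacting the root servers, so a LAN-only setup is entirely possible. A minimal BIND sketch follows; the zone name "classroom.lan" and every host name and address here are made-up examples:

Code:
// /etc/bind/named.conf.local - declare a local-only zone
zone "classroom.lan" {
    type master;
    file "/etc/bind/db.classroom.lan";
};

Code:
; /etc/bind/db.classroom.lan - example records only
$TTL    604800
@       IN  SOA ns1.classroom.lan. admin.classroom.lan. (
                      1       ; Serial
                 604800       ; Refresh
                  86400       ; Retry
                2419200       ; Expire
                 604800 )     ; Negative cache TTL
@       IN  NS  ns1.classroom.lan.
ns1     IN  A   192.168.0.1
pc01    IN  A   192.168.0.11
pc02    IN  A   192.168.0.12

With each client's /etc/resolv.conf pointing at 192.168.0.1, every machine becomes pingable as, e.g., pc01.classroom.lan.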
I would like to set up one of my old PCs as a file server and internet gateway; we live in a large building shared with 40 others. The Ubuntu box would be the one connected to the internet via ethernet, sharing the connection via wifi. I haven't started yet - presently I am doing the groundwork and reading before starting. I understand sharing the internet is relatively simple and can be done from the GUI.

What we would like is something like BT Fon or BT Open Zone in the UK: you can hook onto a free network, but in order to access the outside (internet: email, web, ftp, etc.) you need to log in. The login would help us monitor fair usage. I imagine something with a username and password for each user would do: as there are a few of us in the same building, not everyone is actually paying for the connection, and we don't want to end up with rather large excess bills. So the ones who are paying get access to both the files and the internet; those who don't just have access to the files on the local server.

Do I need Ubuntu Server to set this up? What hardware would be ideal, given that we are all far from rich but willing to have a nice setup?

It would be great if you could share some knowledge around the topic and perhaps provide a tutorial; also, any heads-up on the hardware side would be great (signal boosters, etc. - there are 3 floors and 3 buildings).
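The internet-sharing half really is just NAT on the gateway box. A minimal sketch, assuming eth0 faces the modem and wlan0 faces the wifi clients (the interface names are assumptions):

Code:
# let the kernel forward packets between the two interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# rewrite outgoing client traffic to the gateway's own public address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT

The per-user login requirement is what captive-portal packages such as CoovaChilli or WiFiDog implement on top of a gateway like this.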
I have a Linux box (Fedora) with two ethernet cards, eth1 and eth2. On eth1 I successfully configured a PPPoE internet connection, such that from the server I can browse the internet. eth2 I wired to a wireless router, essentially to provide the wireless cloud. On eth2 I also configured DHCP, so that the Linux box is both the PPPoE and DHCP server. However, my clients on the LAN cannot access the internet.
On running the route command I get:

Code:
Destination     Gateway     Iface
196.44.x.y      0.0.0.0     ppp0
192.168.1.0     0.0.0.0     eth2    (my subnet)
0.0.0.0         0.0.0.0     ppp0
The router (functioning mainly as a wireless access point) has a fixed IP address of 192.168.1.2, and eth2 has IP address 192.168.1.1. The dhcpd configuration on the Linux box sets the routers (gateway) option to 192.168.1.1. I cannot figure out how to correctly set the routing table so that my wireless clients can access the internet. I googled and googled but found no solid solution. Any suggestions?
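The routing table above looks reasonable; what is usually missing in this arrangement is forwarding and NAT on the PPPoE box, since the clients' 192.168.1.x addresses cannot travel the internet unrewritten. A sketch:

Code:
# let the kernel forward packets between eth2 and ppp0
echo 1 > /proc/sys/net/ipv4/ip_forward
# NAT the LAN clients out the PPPoE link
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o ppp0 -j MASQUERADE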
I have a file 'my_file.txt' stored on 'myserver1.col.edu'. Now, I am using a different server, 'myserver2.col.edu', to do some work, and I want to access 'my_file.txt' on 'myserver1.col.edu' to read (and possibly edit) it WITHOUT physically copying the entire file across. Is there a way to do this - perhaps through ssh?
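One option is sshfs, which mounts a remote directory over SSH so the file can be read and edited in place. A sketch; the username, remote path, and mount point are assumptions:

Code:
mkdir -p ~/remote
sshfs user@myserver1.col.edu:/path/to/dir ~/remote
vi ~/remote/my_file.txt    # edits are written straight back to myserver1
fusermount -u ~/remote     # unmount when finished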
I have a problem with my Slackware 13.1: I can't access it from outside my local network. It's running behind a router and I have activated the DMZ for the Slackware computer. I can access the web from the Slackware machine, but I can't reach it from outside my LAN.
I have a dynamic internet connection. In my network, one machine runs a GlassFish server on port 8080 (its IP address is 192.168.1.3). If I type http://192.168.1.3:8080/ it loads my GlassFish server page. What I want is for the dynamic public IP, i.e. http://<DynamicIP>:8080/, to load my GlassFish server page. How can I do it? My router is a D-Link GLB-802C. I don't have much networking knowledge.
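Two pieces are needed: the D-Link must forward external port 8080 to 192.168.1.3, and because the public IP keeps changing, a dynamic DNS hostname is the usual way to track it. A quick way to see the current public address from a shell; the lookup service named here is just one of several:

Code:
# print the address the rest of the internet sees for this connection
wget -qO - http://checkip.dyndns.org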
I'm trying to set up a small intranet system to run OpenERP or similar using browser-based clients. I have an Ubuntu machine running 10.04 desktop edition to act as a temporary/testing server until we set up a proper, dedicated machine with 10.04 server edition. I have installed Apache2 from the repos and it is up and running fine - locally. That is the problem: I can't access the server from other machines on the LAN. Ping works, btw. So I've been reading tutorials and howtos for the past week, but for the life of me I can't find what I'm doing wrong. The standard Apache setup seems to be made to "just work", so although I've looked at the various configuration files mentioned in the tutorials, I haven't actually changed anything.
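If ping works but HTTP doesn't, a first check is whether Apache is listening on all interfaces rather than only on localhost. A sketch of the checks, assuming the default port 80:

Code:
# 0.0.0.0:80 here means Apache accepts connections on every interface
sudo netstat -tlnp | grep :80
# "Listen 80" listens everywhere; "Listen 127.0.0.1:80" would shut out the LAN
grep "^Listen" /etc/apache2/ports.conf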
I'm trying to set up an Apache webserver on my computer in order to practice HTML5/CSS3 for an upcoming competition I'm in. I'm able to access my site from inside my network, but not from outside it. I've had several people try, and they all report that the server just times out. I'm running Ubuntu 10.04 and Apache 2.2.17.
OK, I want to access my server from the internet. I have checked with my mobile internet provider (3) and they say they don't block any ports. Is it just a case of having the firewall allow external access to the PC (no router, just a mobile dongle)? And do I just ssh to the external IP on the SSH port?
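Opening the local firewall is the easy half; a ufw sketch is below. The other catch is that many mobile providers put dongles behind carrier-grade NAT, in which case the "external" IP is shared and inbound connections never arrive no matter what the PC allows.

Code:
sudo ufw allow 22/tcp    # permit inbound SSH
sudo ufw enable
# then, from outside:
ssh user@<external-ip>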
Is there a safe way for me to configure my server for access from any internet connection as well as from my home/office LAN? I'd like to be able to access file shares, Webmin, and the router console behind my gateway for maintenance purposes. Access to the server desktop itself would be a bonus.
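A safe pattern is to expose only SSH to the internet and tunnel everything else through it. A sketch; the hostname is an assumption, 10000 is Webmin's default port, and the router console is assumed to sit at 192.168.1.1:

Code:
# one SSH session carries all the forwarded services
ssh -L 10000:localhost:10000 -L 8080:192.168.1.1:80 user@your.public.hostname
# then browse https://localhost:10000 for Webmin
# and http://localhost:8080 for the router console

The server desktop can ride the same tunnel too, e.g. VNC with -L 5901:localhost:5901.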
I am using a CentOS 5.4 server for Snort (it's actually using the easyIDS config). I'm trying to modify some things, and I've noticed that I can't seem to download any files: wget, ftp, etc. all just time out. It's not a network firewall issue, as I've been monitoring the logs and see no blocked traffic, and other machines on the subnet can get outside with no problems. I checked the CentOS firewall using the setup command, and it says it is disabled as well. I'm very new to Linux, so I'm wondering how I can troubleshoot this? The wget and ftp errors just say the connection timed out, but I'm not sure why.
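With the LAN fine and nothing blocked at the network firewall, the usual suspects on the box itself are the default route, DNS, and rules that easyIDS may have installed regardless of what the setup tool reports. A sketch of the checks:

Code:
ip route                      # is there a default route to the gateway?
cat /etc/resolv.conf          # are nameservers configured?
iptables -L -n -v             # any rules easyIDS added on its own?
wget -d http://example.com/   # debug output shows where it stalls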
I have a server that was set up by a friend, so I have a place to save all my work documents on a RAID array.
It is on a static IP address, I can ping the hub and other computers on the network absolutely fine.
I can't connect to the internet. The router in question is a Netgear CG3101D. Logging into the router I can see that the server is listed as a trusted device, and all the parameters are the same as on another computer running Ubuntu Studio.

Does anyone have any tips on how I can find out what is wrong?
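A quick way to narrow it down is to separate routing failures from DNS failures:

Code:
ip route show           # default route via the Netgear present?
ping -c 3 8.8.8.8       # raw reachability past the router
ping -c 3 google.com    # succeeds only if DNS also works
cat /etc/resolv.conf    # which nameservers the box is using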
First off - I'm running 11.04 as a VPS on OpenVZ.

I've set up a very basic environment of LAMP + mail server. Installing Apache, PHP, MySQL, and proftpd all went fine, but whenever I install iRedMail (which is Postfix + Dovecot + ClamAV and a few more packages) I lose the internet connection.

Although the server is easily reachable via ssh and http, I can't connect anywhere from the server. At first I thought it only had issues resolving domain names, but now I see it can't even ping IPs.
Code:
# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
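iRedMail ships its own iptables rule set, which is a plausible culprit when outbound traffic dies right after installation. A sketch of checks:

Code:
iptables -L -n -v    # look for restrictive rules added by iRedMail
ip route             # is the default route still present?
ping -c 3 8.8.8.8    # test raw connectivity with DNS out of the picture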
I'm setting up an old box as a dedicated file-sharing server on the LAN as well as an internet web server for my personal web site, but I have no network connection. My computer is connected to a router which is connected to my DSL modem. The router lists the Ubuntu box's MAC address as well as a Win7 box, which connects to the internet fine.
Here is what I've tried:
1. Check routing table
Code:
2. Tried to add a default gateway to the internet on eth2; this happens:
Code:
3. I edited resolv.conf, which was empty, adding:
4. I edited /etc/network/interfaces as follows:
Code:
Then I type the following:
Code:
And it keeps doing this endlessly, presumably because it's not finding the DHCP server? This didn't solve the problem, so I attempted another configuration:
Code:
Still no internet connection and no ability to apt-get anything (it says packages cannot be found).

So this didn't work either. What I've tried should work, especially the route command. Now why won't it work?
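For reference, a minimal static /etc/network/interfaces sketch; every address below is an assumption and must match what the router actually uses:

Code:
auto eth2
iface eth2 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

After running sudo /etc/init.d/networking restart, a nameserver line (the router's address often works) goes in /etc/resolv.conf. If DHCP gets no reply at all, it is also worth confirming the cable, the router port, and that the interface really is eth2 (ifconfig -a lists everything).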
I have a server in a corporate data centre with some virtual machines running on it. The main server is accessible from the internet via SSH. There are some people within the LAN who access the virtual machines, whose IPs on the LAN are:

From the internet, only one host is allowed SSH. This machine has a public IP and is also connected to the LAN on the IP 192.168.1.50. Tunnelling is not allowed on our network. So now I have come across a solution as explained at this link. I am not clear on which machine's .ssh/config file I should add the following:
Code:
Host securehost.example.com
    ProxyCommand ssh user1@insidemachine.com nc %h %p

Should the above be done on the gateway, where the public IP and SSH are allowed, or on the internet client who has to log in? Do I need to create separate accounts on the gateway so that the users who SSH to the gateway are then forwarded to the inside machines? Or is one account on the gateway sufficient for the different people logging in via the internet and being forwarded to an internal machine?

Do I also need to create an account user1 on the gateway?
1) What is the correct syntax for the ProxyCommand in the gateway's .ssh/config? Should I use

Code:
ProxyCommand ssh user1@inside.machine nc %h %p

or should I use

Code:
ProxyCommand ssh user1@gateway.com nc %h %p
2) Should I create new user accounts on the gateway matching those that exist on the internal machine?
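For what it's worth, the ProxyCommand belongs on the internet client, not on the gateway: the client first SSHes to the gateway, whose nc then relays the second SSH connection inward. A sketch of the client's ~/.ssh/config; the hostnames and the VM address are assumptions:

Code:
Host gateway
    HostName gateway.example.com    # the machine with the public IP
    User user1

Host vm1
    HostName 192.168.1.51           # one VM's LAN address (example)
    User user1
    ProxyCommand ssh gateway nc %h %p

Then ssh vm1 hops through the gateway transparently. Each person needs an account (or a shared one) on the gateway plus an account on the internal machine; the inner hop still authenticates end-to-end.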
CentOS 5.4 is running on a remote machine. I have a remote site where internet access is provided via a Squid proxy, so when we enter an address in the browser, the internet works fine. But on the command line (bash shell prompt), tools like wget, ping, nslookup, traceroute, etc. do not work.
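Tools like wget honour the http_proxy environment variable, whereas ping, nslookup, and traceroute speak ICMP/UDP and cannot go through an HTTP proxy at all. A sketch; the proxy host and port are assumptions:

Code:
# in ~/.bashrc or /etc/profile.d/proxy.sh
export http_proxy=http://proxy.example.com:3128
export ftp_proxy=$http_proxy
wget http://example.com/file.tar.gz    # now routed through Squid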
I've a Squid proxy server installed on a SUSE 9.0 ES server. I created the cache dirs on separate partitions for better caching, and it was working fine. But for the last 15-20 days I have experienced very slow net access for clients. Going through /var/log/messages, I see it generates a two-line error message.

The message count grows as the number of clients (for internet access) increases, and the errors taper off as soon as the number of clients drops. As the client count climbs, the error messages multiply and internet access gets slower and slower.
I have configured a Squid server and it is working fine. I want only specific IP addresses on my LAN to be able to access the internet, and for that I have put these entries in the access control lists in squid.conf:
Code:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT
acl QUERY urlpath_regex cgi-bin \?
acl apache rep_header Server ^Apache
acl our_networks src 192.168.0.181/255.255.255.0 192.168.0.182/255.255.255.0
And in http_access I have given this:

Code:
http_access allow our_networks
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
With this I want only 192.168.0.181 and .182 to be able to access the internet, but the problem is that all the IPs on the LAN, like 192.168.0.20, are also able to access the internet. What changes do I need to make to allow access only to those specific IP addresses? I am not using any firewall or iptables entries, and I am manually changing the proxy settings in Firefox on the client side to access the internet.
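A likely cause: the /255.255.255.0 mask in the our_networks ACL makes each entry match the whole 192.168.0.0/24 network rather than a single host, so every LAN address slips through. A sketch with host-only masks:

Code:
# match exactly the two hosts, not their subnet
acl our_networks src 192.168.0.181/255.255.255.255 192.168.0.182/255.255.255.255
# equivalently, in CIDR notation:
# acl our_networks src 192.168.0.181/32 192.168.0.182/32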
I am using SuSE 10.2 for an internet and e-mail server. Currently all my users have internet access if they know how to set up their web browsers. How do I deny some users internet access, so that a user can only access his/her e-mail but not the internet?
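If web access goes through a Squid proxy on the same box - an assumption here - denying the relevant client IPs in squid.conf keeps them off the web while mail, which never passes through the proxy, keeps working. A sketch with made-up addresses:

Code:
# in squid.conf: clients who should get e-mail only
acl no_internet src 192.168.0.50 192.168.0.51
http_access deny no_internet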
We have Apache installed on CentOS 5.3 in our laboratory. The server has been running fine for almost two years; it is actually the first CentOS 5 release, just regularly updated. Most of our applications are custom-made PHP applications, and until now we have somehow managed to avoid using PHP to fetch files from the internet. But now we are desperate, because we need to allow PHP to fetch files through Apache, and it seems as if Apache is not allowed to make a connection to the outer world. Additionally, we use a proxy server to connect to the outer world, so right at the beginning the http_proxy environment variable is set. For the root user it all works fine after that, but it seems as if the apache user is not allowed to access the internet. Just to remark, our web server can be accessed from the outer world, so it's a one-way street for now.
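An environment variable exported in root's shell never reaches processes running as the apache user, so PHP inside Apache won't see http_proxy. One workaround is to name the proxy explicitly in the PHP code; a sketch, where proxy.example.com:3128 is an assumption:

Code:
<?php
// fetch a remote file through the lab's proxy from within PHP
$context = stream_context_create(array(
    'http' => array(
        'proxy'           => 'tcp://proxy.example.com:3128',
        'request_fulluri' => true,   // most proxies require the absolute URI
    ),
));
echo file_get_contents('http://example.com/somefile.txt', false, $context);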
I want to buy a PCI Weasel; people have told me I will have access to the server over the internet with this board. Does anyone have any experience with this kind of card? How can I get access to the server over the internet, down to the BIOS, etc.?
I am getting "access denied" when trying to log in via SSH to my home server with PuTTY (Windows) over the internet. I can use any user, including root, and get the same result. If I use my Android phone with the ssh terminal command, I am able to log in and use the server successfully.
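The server's auth log usually states the exact reason a login was rejected; watching it while retrying from PuTTY narrows things down quickly:

Code:
# on the server, while the PuTTY login is attempted
sudo tail -f /var/log/auth.log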
I have successfully set up PPTPD on my server and I can open a VPN tunnel, but my clients can only ping the server's IP; they don't have access to the internet through the VPN.

I have searched different forums and understand that I have to create a route on the server to route packets between the VPN interface and my internet gateway, but I didn't manage to get this to work.

Here is what my setup looks like:
Code:
root@r31495:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:1c:c0:c7:13:35
          inet addr:94.23.197.XX  Bcast:94.23.197.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
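Routing alone isn't enough here: the VPN clients' private addresses also have to be NATed out of eth0. A sketch; the client pool below is an assumption, so substitute whatever range pptpd actually assigns:

Code:
# let the kernel forward between the ppp interfaces and eth0
echo 1 > /proc/sys/net/ipv4/ip_forward
# NAT the PPTP client pool out the public interface
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE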
Ubuntu LTS Server is running on a remote machine. I have a remote site where internet access is provided via a Squid proxy, so when we enter an address in the browser, the internet works fine. But on the command line (bash shell prompt), commands like wget, ping, nslookup, and traceroute do not work.
I'm getting my first web server configured, and as per a tutorial I found, I used Shorewall. However, it blocks all internet access (even for apt) to my server! Does anyone know a decent firewall program, or a good guide on configuring Shorewall?
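On Ubuntu, ufw is a much simpler option for a single web server; a sketch:

Code:
sudo ufw default deny incoming
sudo ufw default allow outgoing   # keeps apt and other outbound traffic working
sudo ufw allow 22/tcp             # SSH
sudo ufw allow 80/tcp             # HTTP
sudo ufw enable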
I'm running my own PPTP server, but I can't get it to access the internet. All my PCs at home are in the 192.168.0.0/24 net; the PPTP server has local IP 192.168.0.5 and remote IPs 192.168.0.80-99. The router to the internet is at 192.168.0.1, and the IP of eth0 on the machine where pptpd runs is 192.168.0.4. I want to be able to connect to the internet through that VPN and access my local LAN servers (which works fine so far). I can ping internet and local IPs successfully, but cannot access them with a browser or connect to them in any other way. I have ACCEPTed all input/output and forwards.
I am running a Squid proxy on the same machine, and if I do

Code:
iptables -t nat -A PREROUTING -i ppp0 -s 192.168.0.0/24 -p tcp --dport 80 -j REDIRECT --to-port 3128

I can access the internet through Squid, but of course Jabber/ICQ etc. won't work then, because it only redirects port 80. I want the PPTP clients to connect to the internet directly, but if I don't use that rule it's not possible to load any pages. Pinging works all the time, and DNS is also working fine, yet I can't even access webpages via IP directly. How can I give the PPTP IPs 192.168.0.80-99 direct access to the internet with iptables?
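Pings and DNS succeeding while TCP connections hang is the classic signature of an MTU problem on a ppp link: small packets fit, full-size ones don't. Clamping the TCP MSS on forwarded traffic is the usual fix; a sketch:

Code:
# shrink TCP's advertised segment size to what the tunnel can carry
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# and NAT the PPTP pool out toward the router
iptables -t nat -A POSTROUTING -m iprange --src-range 192.168.0.80-192.168.0.99 -o eth0 -j MASQUERADE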