General :: Configure Iptables For Only HTTP And HTTPS Traffic
Aug 11, 2011
I am trying to configure iptables for only HTTP and HTTPS traffic. I start by blocking all traffic, which works, via:
Code:
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
I then try to allow HTTP and HTTPS on eth0 with these commands, which does not work:
Code:
iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
Code:
iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
After these commands I should be able to access the internet. Does anyone know why this is not working?
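One likely explanation: the rules above only match inbound connections to a local web server (--dport 80/443 on INPUT). For browsing out to the internet, the --dport match belongs on OUTPUT, and DNS has to be allowed as well. A minimal sketch, assuming eth0 is the outbound interface:
Code:
# outgoing web requests and their replies
iptables -A OUTPUT -o eth0 -p tcp -m multiport --dports 80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -m multiport --sports 80,443 -m state --state ESTABLISHED -j ACCEPT
# DNS lookups, needed to resolve hostnames at all
iptables -A OUTPUT -o eth0 -p udp --dport 53 -j ACCEPT
iptables -A INPUT -i eth0 -p udp --sport 53 -j ACCEPT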
I need to redirect all http/https/ftp traffic through a remote proxy, but when I change the connection settings in the browser or in System->Preferences->Network Proxy it doesn't work properly: instead of showing the page content, the browser asks to save a short (8-byte) file with the same content for every requested page. It happens in Chrome/Opera/Firefox. The proxy requires authorization and runs on a computer with Windows XP. It worked well when I was using Windows 7 and Proxifier; now I have Ubuntu 9.10 with all available updates.
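One approach worth trying here (an assumption, not verified against this particular proxy) is proxychains, which forces a program's TCP traffic through an authenticating HTTP proxy. The address and credentials below are placeholders:
Code:
# /etc/proxychains.conf (placeholder proxy address and credentials)
strict_chain
proxy_dns
[ProxyList]
http 192.0.2.10 8080 myuser mypassword
Running, for example, "proxychains firefox" would then route that browser's traffic through the proxy.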
I'm looking to use Linux (Ubuntu 9.10) as a network bridge between two subnets. I can configure iptables to permit all traffic on eth0 (subnet 1) to pass to eth1 (subnet 2), but before transmitting that traffic I want to perform further analysis. Is it possible, within iptables or via a third-party product such as Pyroman, to write a "hook" that directs that traffic to another application installed on the same host?
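iptables can hand matching packets to a userspace program via the NFQUEUE target; the application then reads packets from the queue (e.g. with libnetfilter_queue) and issues an accept/drop verdict on each. A sketch, assuming the analysis tool listens on queue 0:
Code:
# push all traffic forwarded from eth0 to eth1 into userspace queue 0
iptables -A FORWARD -i eth0 -o eth1 -j NFQUEUE --queue-num 0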
I had set up an SSL secure server a while back, such that [url] works but [url] does not (note the difference: in the first I use HTTPS, whereas in the second I use HTTP). How can I get both to co-exist?
I used the Angry IP Scanner software and found that a lot of the public IP addresses on our network are accessible from outside when they are not supposed to be, e.g. printers, PCs, etc. To make a start on locking down the network, I was wondering if anybody knew the iptables command to add a rule which blocks all incoming traffic to specific IP addresses on the network and to a range of IP addresses.
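For reference, blocking inbound traffic to one internal host and to a range might look like this on a gateway (the addresses below are placeholders, and the iprange match module is assumed to be available):
Code:
# drop everything forwarded to a single internal address
iptables -A FORWARD -d 192.168.1.50 -j DROP
# drop everything forwarded to a range of addresses
iptables -A FORWARD -m iprange --dst-range 192.168.1.100-192.168.1.150 -j DROP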
I need to set up my CentOS computer as a firewall in my home network. I've got 2 interfaces, eth0 and eth1. I want to allow and forward all traffic on eth0 and block all traffic on eth1 except ssh, ping (icmp) and DNS. How do I do this? I've tried some editing in /etc/sysconfig/iptables but no luck.
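A rough sketch of what that could look like, assuming eth0 faces the side you trust (which side is which is an assumption here):
Code:
# trusted side: accept and forward everything
iptables -A INPUT -i eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -j ACCEPT
# restricted side: ssh, ping and DNS only, drop the rest
iptables -A INPUT -i eth1 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i eth1 -p icmp -j ACCEPT
iptables -A INPUT -i eth1 -p udp --dport 53 -j ACCEPT
iptables -A INPUT -i eth1 -p tcp --dport 53 -j ACCEPT
iptables -A INPUT -i eth1 -j DROP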
I have my Linux laptop running Karmic Koala at the moment. It is connected via CAT5 to a switch. The switch then connects to my router. All five of my computers are connected to the switch, actually. The only one that won't talk to any sites other than HTTPS (secure) sites is the Linux box. I am not well-versed in the inner workings of Linux and need some help with what I need to do so that regular HTTP sites work. You guys always have the right answers, so I will wait humbly for your replies.
I have a server (Fedora 12) set up at a client's datacenter, and the network is set up to allow me ssh access into the server but prevents me from opening any connections from the server. However, I need to make http and https requests from the server. What I'd like to do is forward all http/https traffic through another machine outside the network.
I've been looking at the documentation for ssh and the various options there, and have gotten as far as being able to initiate an ssh connection from the client's network back to my machine, but am not sure where to take it from there.
Here are some of the commands I've used so far:
Code:
I'm attempting to have port 80 forwarded through my local machine. I assume I use "ssh -R" to create a tunnel to forward requests, but I must be missing something.
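One pattern that fits this setup, assuming a proxy (squid or similar) listens on port 3128 on an outside machine you control: open the reverse tunnel from that outside machine, then point the server's proxy variables at the tunnel. The port numbers here are placeholders:
Code:
# run from the outside machine (the one allowed to ssh into the server)
ssh -R 3128:localhost:3128 user@server
# then, in a shell on the server, send web traffic through the tunnel
export http_proxy=http://localhost:3128
export https_proxy=http://localhost:3128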
I am working on a project to create a video conferencing environment. For this I use a default installation of BigBlueButton on Ubuntu 10.04. One of the main problems here is that it's not safe enough to share classified documents through this software. It's a simple webserver that uses nginx. What I want to do is make this connection secure.
One of the problems is that I don't only have a connection through port 80; it uses the following ports: 80 (HTTP), 1935 (RTMP), 9123 (desktop sharing). I would like to use a proxy instead of tunneling or a VPN to do this. Would anyone happen to know anything about squid or another equivalent to do this?
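squid only proxies HTTP, so it won't encrypt arbitrary TCP ports like 1935 and 9123. One alternative sketch is stunnel, which wraps each plain port in TLS independently; the certificate path and the listening ports chosen below are assumptions:
Code:
# /etc/stunnel/stunnel.conf
cert = /etc/stunnel/stunnel.pem

[https]
accept = 443
connect = 80

[rtmps]
accept = 1936
connect = 1935

[desktop-sharing]
accept = 9124
connect = 9123
The clients would then have to connect to the TLS-wrapped ports instead of the plain ones.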
I'm running a web server on port 80, but I want traffic coming from IP 212.333.111.222 on port 80 to be forwarded to port 9020 on the same server my web server is running on; that is my sshd port.
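A NAT rule along these lines might do it (a sketch, assuming the packets arrive on the same box that has the service on port 9020):
Code:
# redirect port 80 from that one source address to local port 9020
iptables -t nat -A PREROUTING -p tcp -s 212.333.111.222 --dport 80 -j REDIRECT --to-ports 9020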
For some years now I have been able to use openssl (apache-mod_ssl) to process encrypted traffic because I had, in effect, only one host - the main server - as the sole entry in our ssl_vhost.conf file.
Now we are working toward serving a couple of more secure sites for closely related organizations, but with their own distinct identities. This, in the past, would have meant additional static IPs with matching nic cards for starters. But my understanding is that since 2007/8 we have been able to use gnutls (apache-mod_gnutls), which gets around the old problem of Apache not being able to route name-based SSL traffic, since the requested hostname was still encrypted at the moment the server had to pick a certificate. This is referred to as SNI - Server Name Indication.
Here my confusion begins. Is there an overlap between SSL and TLS? For instance, I would have generated RSA keys and a self-signed certificate with the genrsa command. Is this sufficient for gnutls, or does it need to generate its own keys and certificates? I realize gnutls is a relatively new kid on the block, but it is appealing and I'd like to give it a try.
I am working with the Mandriva/Mageia cooker with an x86_64 architecture so all packages are up-to-the-minute.
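For what it's worth, TLS is essentially the standardized continuation of SSL, and the keys and X.509 certificates are the same format for both mod_ssl and mod_gnutls, so material generated with openssl along these lines should be usable by either (filenames are examples):
Code:
# generate an RSA key and a self-signed certificate valid for one year
openssl genrsa -out server.key 2048
openssl req -new -x509 -key server.key -out server.crt -days 365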
When I click on an http link in Thunderbird, nothing at all happens. There's no error message on the console, there's no new browser starting, and there's no new tab opening with the browser already running.
I've tried: sudo update-alternatives --config x-www-browser, choosing firefox. I've also tried adding a new string value to Thunderbird's warranty-voiding config: network.protocol-handler.app.http, with a value of /usr/bin/firefox. This was recommended in various threads. But no luck. There's an entry for https - firefox on the Attachment tab of the preferences, and https links are indeed opening in Firefox. But not http. KDE 4.4.2 itself has the default mail client and browser set to Thunderbird 3.0.4 and Firefox 3.6.3 (both from the repositories, no website downloads).
What is the best way to go about setting up multiple virtual hosts on the same box, one using http and one using https/ssl? I'd like to serve them from the same ip address if possible; I know it's possible in apache 1.3.
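In Apache 2.x the usual shape is two VirtualHost blocks on the same address, one per port; a minimal sketch (names and paths below are placeholders):
Code:
Listen 80
Listen 443

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/site
</VirtualHost>

<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/site-ssl
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
</VirtualHost>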
For one project I use a web hosting service. I wanted the entire site to be https, so I bought a service from them in which they automatically install a trusted cert so people can access the site through https protocol. Since http is still available, though, I need to do automatic rewrites or something to change http into https requests. (I don't have access to their Apache server configuration files or anything like that.) I found on the net this code to add to my .htaccess file:
Code:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Is there a driver or utility to mount an iso image over http or https? There is a httpDisk driver for windows but I can't find anything similar for Linux.
There may be a way of using curl/wget, but I'm not sure if this is possible.
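One possibility, assuming the FUSE-based httpfs2 tool is packaged for your distribution, is to expose the remote file locally and then loop-mount it (the URL and mount points are examples):
Code:
httpfs2 http://example.com/image.iso /mnt/http
mount -o loop /mnt/http/image.iso /mnt/iso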
How do I best manage both HTTP and HTTPS pages on the same Apache server without conflicts? For example, if I have both 000-default.conf and 000-default-ssl.conf pointing to mydomain.com, and don't want users who visit mydomain.com without specifically typing the https prefix to be redirected to the https page - how do I handle users using browser plugins such as HTTPS Everywhere etc.?
Another option would be to create a subdomain ssl.mydomain.com and have users who want to reach the SSL site type the ssl. prefix. I have tested several things with HTTPS Everywhere enabled in my own browser, and it seems really hard to make this work the way I want; one way or another I always end up getting redirected to the SSL site automatically.
The reason I need this to work is that I run one site where I don't care much about SSL - that is the "official" part of the site - and I also host some things for friends and family on the SSL part. This would not have been a problem if it weren't that I use self-signed certificates for my SSL site, and most users become afraid when a certificate warning pops up in their browser and therefore leave the site.
I have 2 web servers in my office: http and https. You will find attached the httpd.conf and ssl.conf. I can access the https server from home, but not the http one.
What I did: configured the router to forward port 80 to my Fedora 11 machine, opened port 80 with system-config-network, and created a virtualhost.
The exact same steps were done for port 443.
I can access both servers locally but only the https server remotely.
Here are my iptables :
Code:
You can try to access my servers using [url]
I made httpd listen on port 8080 and did all the port forwarding/opening stuff, and it works. So is it a bug?
Finally found my error: it seems setting UseCanonicalName to Off did the trick.
I really think it's a bug now. It was definitely working last week; I just added content to the main host of my website, and now I can't access it on port 80. If someone thinks it's not a bug, or finds something missing/wrong in my conf file, please let me know.
I've noticed that when I open Firefox I get really strange HTTP and HTTPS connections showing up in Firestarter (which, as I understand it, is just a GUI for iptables). They connect to various parts of a site listed as 1e100.net (when you use "lookup hostnames"), such as wy-in-f18.1e100.net, and they stay connected all the time as far as I can see, unless I close Firefox. I've heard people say they are connected to Google, but I can close all tabs after logging out of Google and still see them... it's very odd.
I was using LDAP authentication for access to my http shares. But I screwed up the LDAP server, and now I want to configure my http shares against NIS authentication.
I followed the procedure to setup NIS client, added:
Though I am able to ssh to my machine as a NIS user, http is still not authenticating the users.
It fails with the following message in /var/log/secure:
httpd: pam_unix(httpd:auth): authentication failure; logname= uid=48 euid=48 tty= ruser= rhost= user=nisuser
(48 is the default apache user in my /etc/passwd.)
I don't have any pam_ldap.so in my httpd pam file, since I had to remove the LDAP configuration, and I can't switch back to LDAP.
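For comparison, a minimal /etc/pam.d/httpd that defers to the system stack (which picks up NIS users via pam_unix when NIS is in nsswitch.conf) might look like this - a sketch, assuming the stock Fedora/RHEL system-auth file:
Code:
auth    include system-auth
account include system-auth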
I have tried to configure my iptables to allow only HTTPS connections to the internet. Unfortunately, I didn't get that to work. I configured it like this:
Quote:
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -t filter -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -t filter -p udp --dport 53 -j ACCEPT
[Code]....
Of course, I am only trying to access websites via HTTPS. Still, I was wondering if HTTPS somehow requires the HTTP port to be open under the hood, or if my rules are wrong in some other way.
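HTTPS itself only needs TCP 443, but browsers often fetch certificate revocation data (OCSP/CRL) over plain HTTP, so a strict HTTPS-only ruleset can still misbehave on some sites. The outbound rule itself would be along these lines (a sketch, on top of the ESTABLISHED rule already shown above):
Code:
iptables -A OUTPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT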
I connect to the internet at work through an authenticating proxy, and to avoid having to enter the proxy info into every app I use (e.g. Firefox, wget, KDE, etc.) I have set up squid as a local transparent proxy which authenticates against and routes all traffic to the work proxy. It has been working fine, but lately I haven't been able to connect to any https sites. I don't think I have changed the configuration, so perhaps it is the result of an upgrade, or something badly configured on my system from the start. I have tried connecting to https sites without squid and iptables and it works fine. My system is Arch Linux, and my squid.conf file is:
Code:
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80   # http
acl Safe_ports port 21   # ftp
acl Safe_ports port 443  # https
[Code]....
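A common cause of exactly this symptom is redirecting port 443 into a transparent HTTP proxy: squid cannot transparently intercept CONNECT/TLS traffic. One workaround sketch is to intercept only port 80 and let 443 out directly (rule placement, the proxy port, and the uid squid runs as are assumptions about the local setup):
Code:
# let HTTPS bypass the local proxy entirely
iptables -t nat -A OUTPUT -p tcp --dport 443 -j ACCEPT
# intercept plain HTTP, excluding squid's own outgoing traffic
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner ! --uid-owner proxy -j REDIRECT --to-ports 3128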
I have the Nessus application running on a target machine, and its URL is https://hostname:8834/ - which is not accessible. But when I log in to the target machine via ssh and check, the application and the service are running fine. So I think it is being blocked by iptables on the same machine where Nessus is running.
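If local iptables is indeed the cause, listing the rules and opening the Nessus port should confirm it:
Code:
# show current rules, then allow the Nessus web interface port
iptables -L -n --line-numbers
iptables -I INPUT -p tcp --dport 8834 -j ACCEPT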