A while back, I put a site up under a LAMP setup, and followed a guide from ubuntuforums that I googled to set up SSL encryption for the site.
That site works great, but since then I've added other sites to the same LAMP server. They load fine as well, but if I type https:// before going to one of the newer sites, the browser lands on the first site instead and warns that the certificate is fraudulent and that I'm at risk by continuing.
Obviously it isn't an attack site; the certificate is just set up for a single domain. How do I prevent my non-SSL sites from being redirected to the SSL-encrypted site?
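If it helps, the usual fix is to give each site its own virtual host, so only the domain the certificate actually covers answers on port 443. A minimal sketch, assuming Apache with mod_ssl; the domain name and file paths are placeholders:

```apache
# Only the site with a certificate gets a 443 vhost; example.com and the
# paths below are placeholders for your own site and certificate files.
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
    DocumentRoot /var/www/example.com
</VirtualHost>
```

Sites that have no `<VirtualHost *:443>` block then simply refuse https connections instead of being answered with the wrong site's certificate. With a recent Apache and SNI-capable browsers, each extra site can instead get its own 443 vhost and certificate.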
It seems quite strange, though. I don't know why javaranch.com won't open on my system. I'm on 9.04 with Firefox 3.8.6, and there is no particular error: the site just takes forever to load and never finishes, so you end up closing the tab in frustration. I've never had this happen before with any website.
I've been on a quest to enable full routing through my OpenVPN tunnel between my office and the colo. Masquerading would work, but it breaks anything key- or address-based and generally makes things more difficult and opaque. Is there an easy way to do this via iptables? I tried quagga, hoping it would magically solve my problems, but it doesn't seem to do my routing for me; I just set up a basic static route within zebra...
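For what it's worth, routed (non-masqueraded) traffic over the tunnel usually only needs IP forwarding plus matching routes on both ends; quagga isn't required for a static setup. A sketch, in which the tunnel device, subnets, and addresses are all placeholder assumptions:

```shell
# Sketch for a routed (tun) OpenVPN link. Placeholders:
#   192.168.1.0/24 = office LAN, 192.168.2.0/24 = colo LAN, tun0 = tunnel.

# On each endpoint: forward instead of masquerading.
sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT

# Each side needs a route to the far LAN via the tunnel, e.g. on the
# office endpoint:
ip route add 192.168.2.0/24 dev tun0
```

The part that is easy to miss: hosts on each LAN also need a return route to the far subnet (via their local VPN endpoint), either on each host or, more simply, on the LAN's default gateway, because without masquerading their replies carry the remote source address.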
I have three locations: a central office connected to two remote locations. At the central office I run two site-to-site VPNs on a Cisco ASA 5505. The remote end of the first site is a Checkpoint firewall, and the remote end of the second site is racoon on Debian. Both tunnels are up and working. However, while traffic at the first site flows both ways, the second site only works from the central office to the remote office.
For example, I can ssh from a host in the central office to a host in the first remote site (through the Checkpoint firewall), then ssh back from that host to any host in the central office. In contrast, after I ssh from a host in the central office to a host in the second remote office (through racoon), I cannot reach the central office hosts at all (pinging the IP address of a central office host, ssh, etc. all fail). The VPN settings at the central office (the Cisco ASA 5505) are identical for both tunnels. So it seems to me that some routing magic is missing on the host running racoon at the second remote office. Where would such a setting reside? The racoon config files? iptables?
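One place to look on the racoon host is the SPD: racoon itself only negotiates keys, while the kernel policies loaded by setkey (ipsec-tools) decide which traffic enters the tunnel, and they must cover the central-office subnet in both directions. If only the outbound policy exists, replies from the remote LAN never get tunnelled back, which matches these symptoms. A sketch of /etc/ipsec-tools.conf entries, with placeholder subnets and gateway addresses:

```
# Placeholders: 10.1.0.0/24 = central office LAN, 10.2.0.0/24 = second
# remote LAN, 203.0.113.1 = ASA's public IP, 203.0.113.2 = racoon host.
spdadd 10.2.0.0/24 10.1.0.0/24 any -P out ipsec
    esp/tunnel/203.0.113.2-203.0.113.1/require;
spdadd 10.1.0.0/24 10.2.0.0/24 any -P in ipsec
    esp/tunnel/203.0.113.1-203.0.113.2/require;
```

`setkey -DP` will show the policies currently loaded, which makes it easy to compare against what the ASA expects.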
I have set up certain portions of my web site to force https://. How do I force non-https:// on the rest? I know this sounds confusing, so let me give you an example.
I am running a Linux firewall (IPCop) to bridge two networks. Hosts on network A have to use a proxy server in order to get online. This server runs a transparent proxy (squid) configured to use the proxy needed to connect to the internet as an upstream proxy, meaning all the hosts on network B can connect to the internet without having to configure a proxy address.
The problem is that HTTPS also has to go through the upstream proxy, which I'm told can't be proxied transparently by my server because of security issues. This means that hosts on network B currently can't access HTTPS sites.
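Since intercepted HTTPS can't be transparently proxied, one workaround is to hand browsers a PAC file (for example via WPAD) so that only HTTPS is sent to the upstream proxy explicitly, while HTTP keeps going through the transparent squid. A sketch, in which the upstream proxy address 10.0.0.1:8080 is a placeholder:

```javascript
// Minimal proxy.pac sketch: HTTPS goes to the upstream proxy explicitly,
// everything else stays DIRECT and gets caught by the transparent squid.
// The proxy address 10.0.0.1:8080 is a placeholder.
function FindProxyForURL(url, host) {
    if (url.substring(0, 6) === "https:")
        return "PROXY 10.0.0.1:8080";
    return "DIRECT";
}
```

Browsers on network B would then need only the PAC URL (or WPAD auto-discovery) instead of full manual proxy settings.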
I connect to the internet at work through an authenticating proxy, and to avoid having to enter the proxy info into every app I use (e.g. Firefox, wget, KDE, etc.) I have set up squid as a local transparent proxy which authenticates and routes all traffic to the work proxy. It had been working fine, but lately I haven't been able to connect to any https sites. I don't think I've changed the configuration, so perhaps it's the result of an upgrade, or something that was badly configured on my system from the start. I have tried connecting to https sites without squid and iptables, and that works fine. My system is Arch Linux, and my squid.conf file is: Code:
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80   # http
acl Safe_ports port 21   # ftp
acl Safe_ports port 443  # https
[Code]....
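If the local transparent redirect in iptables catches port 443 as well as port 80, HTTPS breaks in exactly this way, because an intercepted browser never issues the CONNECT request squid needs. A sketch of what the redirect usually looks like for a local (OUTPUT-chain) transparent proxy; the squid port and proxy user are assumptions:

```shell
# Redirect only port 80 to the local squid (port 3128 assumed), skipping
# squid's own user so its upstream traffic isn't looped back into itself:
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner ! --uid-owner proxy \
    -j REDIRECT --to-port 3128
# Deliberately no rule for 443: HTTPS must bypass the transparent proxy.
```

Checking the current rules with `iptables -t nat -L OUTPUT -n` would show whether an upgrade or leftover rule started redirecting 443 as well.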
I am having a problem with HTTPS in a double-NAT'd network configuration. The scenario is like this:
[Code]...
Machines on these LANs can talk to each other with no problem. There is also a NAT rule configured for traffic going from LAN A via LAN C out to the Internet. The Nokia is also NAT'ing. Normal web browsing works fine with this setup, but whenever I try to access HTTPS sites, the connection just hangs and eventually times out. Packet captures have shown lots of TCP retransmission messages. If I log on directly to the Linux router and fire up a browser, I can access HTTPS sites without any problems. This appears to be something to do with the traffic being NAT'd twice. Is there a way I can get around this without changing the config of the Nokia?
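TCP retransmissions plus HTTPS hanging while plain browsing works is a classic MTU/MSS symptom: the large packets of the TLS handshake get dropped somewhere along the double-NAT path. A common workaround is to clamp the MSS on the Linux router; this is a sketch, and the chain placement is an assumption about where forwarded traffic passes:

```shell
# Clamp the TCP MSS of forwarded connections to the path MTU, so the
# TLS handshake packets fit through the double-NAT path:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
```

This only touches SYN packets, so it is cheap and does not require changing anything on the Nokia.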
I have a server (Fedora 12) set up at a client's datacenter, and the network is set up to allow me ssh access into the server but prevents me from opening any outbound connections from it. However, I need to make http and https requests from the server. What I'd like to do is forward all http/https traffic through another machine outside the network.
I've been looking at the documentation for ssh and its various options, and have gotten as far as initiating an ssh connection from the client network back to my machine, but I'm not sure where to take it from there.
Here are some of the commands I've used so far:
Code:
I'm attempting to bind port 80 to be forwarded through the local machine. I assume I use "ssh -R" to create a dynamic tunnel to forward requests but I must be missing something.
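One way to read this: since the server accepts inbound ssh but can't open outbound connections, the tunnel has to be opened from the outside in, carrying a proxy with it. A sketch, assuming a machine outside the restricted network ("outside", a placeholder name) that runs an ordinary HTTP proxy such as squid or tinyproxy on port 3128:

```shell
# Run from "outside": reverse-forward the local proxy port into the
# locked-down server, where it appears as localhost:3128.
# -f -N: background, no remote command; names/ports are placeholders.
ssh -f -N -R 3128:localhost:3128 user@fedora-server

# Then, on the Fedora server, point HTTP(S) clients at the tunnel:
export http_proxy=http://localhost:3128
export https_proxy=http://localhost:3128
curl -I https://example.com/
```

`ssh -R` forwards a fixed port rather than creating a dynamic (SOCKS) tunnel; the dynamic option is `ssh -D`, but it opens the SOCKS port on the connecting side, which is why a reverse forward to a real proxy fits this inbound-only setup better.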
I have two Ubuntu machines (9.10 and 10.04) with an OpenVPN tunnel between them. This is the situation:
Code:
NetworkA 192.168.0.0/24
   |
UbuntuA br0:192.168.0.3 (openvpn bridge between eth0 and tap0)
[code].....
UbuntuA has only one interface, eth0, and runs two OpenVPN instances: one bridged instance using br0 and another instance using tun0. UbuntuA is not the gateway for NetworkA; UbuntuB is the gateway for NetworkB. I need communication between PCs on NetworkB and those on NetworkA. This is the "ping situation" (none of the tested PCs has an active firewall):
ubuntuA vs ubuntuB: OK
ubuntuB vs ubuntuA: OK
pc on NetworkA vs ubuntuA and ubuntuB: OK
[code].....
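In a layout like this the missing pieces are usually OpenVPN's own routing directives plus a return route, since UbuntuA is not NetworkA's gateway. A sketch, assuming UbuntuA runs the tun-based server and with all subnets taken from the snippet above (NetworkB's 192.168.1.0/24 is a placeholder):

```
# In UbuntuA's server config for the tun instance:
push "route 192.168.0.0 255.255.255.0"   # tell UbuntuB how to reach NetworkA
route 192.168.1.0 255.255.255.0          # kernel route to NetworkB via the tunnel
# In the client-config-dir file for UbuntuB:
iroute 192.168.1.0 255.255.255.0         # OpenVPN-internal route to NetworkB

# Because UbuntuA is NOT NetworkA's gateway, NetworkA hosts (or their real
# gateway) also need a return route, e.g.:
#   ip route add 192.168.1.0/24 via 192.168.0.3
```

Both `route` and `iroute` are needed on the server side: the first tells the kernel, the second tells OpenVPN which client owns the subnet.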
I'm running a squid proxy on my Ubuntu server, and I must have messed up the squid configuration: users cannot access https pages. Can you tell me what to change in my squid.conf to fix this?
Here is my squid.conf (which is a friend's conf that I have changed for my needs...)
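Without seeing the whole file it's hard to be certain, but HTTPS through squid requires the CONNECT method to be allowed to the SSL ports, and these stock rules are what most often go missing when a config is hand-edited. A sketch of what to check for (or restore) in squid.conf, assuming the stock `SSL_ports`/`Safe_ports` acls exist and `localnet` stands in for whatever acl names your clients:

```
# Stock squid rules needed for HTTPS (browsers tunnel it via CONNECT).
# Order matters: the deny lines must come before the allow lines.
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
```

If `acl SSL_ports port 443` is missing, or a `deny CONNECT` line appears without the `!SSL_ports` qualifier, https will fail while plain http keeps working.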
This started yesterday. I haven't made any recent changes. I can't access any pages beginning with https. It's just my computer because my girlfriend's laptop doesn't have any issues. I'm using OpenDNS, but I have been for a long time and this is the first time this has ever happened. I'm not using a router, I connect straight to the modem, which I've already reset.
I have the Nessus application running on a target machine, and its URL
is https://hostname:8834/, which is not accessible.
But when I log in to the target machine via ssh and check, the application and service are running fine. So I think it is being blocked by iptables on the same machine where Nessus is running.
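If iptables is indeed the culprit, opening the Nessus port on the target machine is enough. A sketch; inserting at the top of INPUT (so it lands before any REJECT rule) is an assumption about the existing ruleset:

```shell
# List current rules first to confirm a REJECT/DROP is catching port 8834:
iptables -L INPUT -n --line-numbers

# Allow inbound TCP 8834 (the Nessus web UI) ahead of the reject rules:
iptables -I INPUT -p tcp --dport 8834 -j ACCEPT

# Persist across reboots on Fedora/RHEL-style systems:
service iptables save
```

If the rule fixes access but should be restricted, a `-s your.admin.ip` match can be added to limit who reaches the UI.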
I am using a Lenovo G550 laptop with an Intel Dual Core 2GHz, 2GB RAM, 250GB HDD, etc. Earlier I had two partitions: 187GB (Windows 7) and another 33GB of Lenovo drivers. I split the 187GB into 143GB (Windows 7), leaving 44GB for Ubuntu 10.10!
Everything has been working fine except for the internet. I am unable to load many https sites like Facebook, Hotmail, etc. Gmail works absolutely fine.
I did some research on this forum and disabled IPv6. I also checked the firewall, which was disabled. Then I configured OpenDNS and checked that it was working fine. But nothing has helped.
When I connect to these sites without the 's' in https (i.e. plain http), they load fast. I enter my username and password and am then redirected to a compulsory https site, which takes me to a page like the one shown in the thumbnails. I have tried Chrome and Firefox 3.6 (both with SSL and TLS enabled in preferences). All these sites work fine on Windows 7, but I don't want to keep using Windows 7 because it has become too slow and boring. Please help me with this.
I connect to the internet over wired DSL (BSNL Broadband, 256 kbps).
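If it's of any use: on DSL (PPPoE) links, a too-large MTU is a common reason plain http works while the https handshake stalls, since the large TLS packets get silently dropped. A sketch of how to test this; the interface name and values are assumptions:

```shell
# Probe the path MTU: the largest -s value that succeeds, plus 28 bytes
# of headers, is the usable MTU (start at 1464 and work down):
ping -c 3 -M do -s 1464 www.google.com

# Then lower the interface MTU accordingly (eth0 and 1492 are guesses;
# 1492 is the PPPoE maximum, and 1400 is worth trying if it still stalls):
sudo ip link set dev eth0 mtu 1492
```

If a lower MTU fixes the https sites, the change can be made permanent in /etc/network/interfaces.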
I need to redirect all http/https/ftp traffic through a remote proxy, but when I change the connection settings in the browser or in System->Preferences->Network Proxy it doesn't work properly: instead of showing page content, the browser asks to save a short (8-byte) file with the same content for every requested page. This happens in Chrome, Opera, and Firefox. The proxy requires authorization and runs on a Windows XP machine. It worked well when I was using Windows 7 and Proxifier; now I have Ubuntu 9.10 with all available updates.
I am using Fedora 14. The system hung while opening a video file, so I had to restart it by pressing the reset button. Since restarting there have been a few problems: System Monitor won't open, and Thunderbird opens but doesn't show any folders, including the inbox.
How do I configure OpenVPN to automatically use a certain VPN for ONLY Firefox, and ONLY on a particular site? I don't want it applied system-wide, where it would break my IM client and all the sites where I have remembered passwords; I only need it for a site or two with regional restrictions.
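OpenVPN routing is inherently system-wide, so the usual per-application, per-site trick is to go through a proxy instead: run a SOCKS proxy on a host that sits behind the VPN, and point only Firefox at it, only for the restricted sites, using a per-URL proxy switcher such as FoxyProxy. A sketch, where the hostname, port, and site pattern are placeholders:

```shell
# SOCKS proxy on localhost:1080, tunnelled to a host that is already
# behind (or can route through) the VPN; runs in the background:
ssh -f -N -D 1080 user@host-behind-vpn
```

Then in Firefox, FoxyProxy (or a similar extension) can be set to use SOCKS5 localhost:1080 only for URLs matching something like `*.restricted-site.example/*`, leaving all other traffic, and every other application, untouched.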
One of the apps I would like to try out is UseNeXT. Selecting the download for the right version (Linux - Suse, Red Hat, Fedora) I get the rpm file. Either opening this straight away, or saving and then opening it, comes back with the following errors:
In the office there is a local network with a samba+OpenLDAP PDC. The local domain name is company.net. The company decided to create a corporate website on remote hosting, and decided that the site's domain should be company.net, the same as the local network's domain name. So now it is not possible to reach the corporate website from within the company's local network because, as I guess, the bind9 installed on the above-mentioned PDC looks for company.net on a local webserver. Is there a way to let people on this local network browse the remote site?
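Yes: the usual fix for this split-horizon situation is to add records for the website's public address to the internal company.net zone, so internal clients resolve the web names to the hosting provider rather than to nothing. A sketch of the entries for the zone file served by bind9 on the PDC, where 203.0.113.10 stands in for the site's real external IP:

```
; Internal company.net zone: point the web names at the external host.
; 203.0.113.10 is a placeholder for the hosting provider's real address.
company.net.        IN  A   203.0.113.10
www.company.net.    IN  A   203.0.113.10
```

After editing, the zone's serial number needs to be bumped and bind9 reloaded (e.g. `rndc reload`) for the change to take effect.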
Here's a strange one. On my internal network there are two WinBlows boxes, both connected to my Linux router/firewall. Yesterday I needed to download a file from http://www.freebsd.org. On the 1st machine I tried, Firefox would not connect. When I tried on the 2nd, it connected immediately. So I did all the usual "stuff" on the 1st one: rebooted (as it's WinBlows), cleared the browser cache, etc. It still wouldn't connect. BTW, connecting to other web sites was no issue; just FreeBSD.
So, just for grins, I ran a traceroute from both machines, and here's where it gets freaky. Here's the traceroute from the working machine code...