Ubuntu Servers :: FTP Breaks With Outbound Connection
Jul 24, 2011
I'm sending files to a remote server by way of FTP via a PHP script. With the firewall turned on, these files arrive at the remote server as 0 KB and the remote server times out before all the files are received. When the firewall is turned off, all files are received intact. There are no outbound rules set in iptables; I'm looking for ideas on what to check next.
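This smells like the classic FTP problem: the data transfer runs on a second connection that a stateful firewall won't associate with the control channel unless the FTP connection-tracking helper is loaded. A sketch of what to check, assuming a conntrack-based iptables setup (the module name varies with kernel version):
Code:
# Load the FTP conntrack helper (older kernels call it ip_conntrack_ftp)
sudo modprobe nf_conntrack_ftp
# Let tracked data connections through as RELATED
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT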
I'm having an issue where a server in CA (1000/full) and in VA (100/full) have very lopsided data transfer.
CA -> VA with iperf shows ~20Mbps
VA -> CA with iperf shows ~93Mbps
If we change the CA server to 100/FULL, transfer speed is 93Mbps both ways.
Some tuning was done to the TCP window scaling parameters, but it doesn't correct the issue, only improves the CA -> VA numbers to what is listed above. I will say that turning TCP window scaling OFF lowers the transfer speed both ways to < 20Mbps.
The only clue I have from the Wireshark dumps is that the advertised window going OUT never goes past 10240 (the scale factor is 8, so 2^8 x 40 bytes). In the opposite direction, the window size will go above 3MB (scaled).
It is not a bandwidth problem as iperf with UDP shows 93Mbps both ways. Local transfers (CA 1000/full to CA 100/full) show full speed both ways, so I feel it is strictly related to TCP window scaling.
RedHat 5 64-bit on both sides. Any ideas why it won't scale above 10240?
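A window stuck at a small scaled value usually points at the socket buffer limits rather than the scaling option itself: the kernel only advertises as much window as tcp_rmem's maximum allows. A tuning sketch, assuming RHEL 5 sysctls (the values are illustrative, not a recommendation):
Code:
# Confirm scaling is actually negotiated
sysctl net.ipv4.tcp_window_scaling
# Raise per-socket receive/send buffer limits: min, default, max (bytes)
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
# The core limits cap the values above
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304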
I've been pounding my head on this issue. I've set up a new Postfix server on RHEL 5. After editing the needed entries, I can't seem to send any outbound mail to Yahoo or any other domain. My postconf -n is as follows:
I have just built an internal Postfix server for sending mail only; it's not accessible outside our network. I will be sending from our domain, and rewriting the From field to abc.com is turned on in the Postfix config. A friend is telling me this will not work because receivers will do reverse lookups on our domain. What does this mean? Obviously the domain the email is sent from is a valid domain. If they do a lookup from the IP the mail came from, it would show Global Crossing, our internet provider. These outbound emails are critical client reports; I want to make sure they are not seen as spam.
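For context, the reverse lookup in question maps the connecting IP to a hostname via its PTR record, and many receivers score mail down when that name doesn't exist or doesn't match the sending domain. What receivers will see can be previewed with dig; the IP below is illustrative:
Code:
# Forward lookup of the sending domain
dig +short abc.com A
# Reverse (PTR) lookup of the IP the mail actually leaves from
dig +short -x 203.0.113.10
# Forward-confirm that the PTR name points back at the same IP
dig +short $(dig +short -x 203.0.113.10) A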
This is the current setup that we have: We have approx 20 clients who pay us to send out a type of e-mail called an E-Blast to their customers. We currently are using 5 Microsoft Windows Virtual Servers to do this. The problem is that those machines are starting to break down. There are times that it will take Microsoft Windows approx 9-10 hours to complete 1 job. This is way too long. We want to move away from Microsoft Windows for this particular type of job as it seems there are more customers who are wanting to use this type of advertising.
It seems that using a Linux server in a command-line (shell-only) environment would be the best way to go, since there is no GUI like Windows. With just text to push around, the jobs should process very, very quickly.
I am in the process of setting up a new SMTP outbound mail server. This is the current software & configuration (what is installed on this new machine):
All of the customer data (Names, E-Mail Addresses, etc that these e-mails are going to) are currently loaded in a Microsoft SQL Database.
The machine I am using is plugged into the DMZ. I have one IP address on the one network card, and I have also added/bound 4 more IP addresses to that card.
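For reference, extra addresses on one card are usually bound as interface aliases; a sketch assuming a Debian/Ubuntu-style /etc/network/interfaces, with illustrative addresses:
Code:
# /etc/network/interfaces -- one alias stanza per extra address
auto eth0:1
iface eth0:1 inet static
    address 192.0.2.11
    netmask 255.255.255.0
# Or bind one on the fly for testing
ip addr add 192.0.2.12/24 dev eth0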
I have configured Postfix for Multiple IP Addresses.
I can, from the command line, send successful test e-mails and receive them in my personal account.
As far as I know everything is setup correctly. I can and will post requested information so that it can be verified that everything is setup correctly.
Here are a couple of my questions:
How can I ensure that I have my network interfaces file and my Postfix master.cf/main.cf files set up correctly?
How can I set up this server to be an outbound SMTP server and get it to use all 5 of the IP addresses to send these e-mails quickly? (See the sketch after these questions.)
What can I use to check and ensure that this server is in fact sending out emails on all 5 IP addresses? (I heard that there is a program named "Postal" that may help in determining this.)
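On the second question, one common Postfix pattern is to define one smtp transport per source IP in master.cf, each bound to its own address, and then spread senders across those transports. A minimal sketch; the transport names, map file and addresses are all illustrative, and sender_dependent_default_transport_maps needs Postfix 2.7 or later:
Code:
# /etc/postfix/master.cf -- one outbound transport per source IP
out1  unix  -  -  n  -  -  smtp -o smtp_bind_address=192.0.2.11
out2  unix  -  -  n  -  -  smtp -o smtp_bind_address=192.0.2.12

# /etc/postfix/main.cf -- choose a transport per sender address
sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

# /etc/postfix/sender_transport (then run: postmap /etc/postfix/sender_transport)
client-a@example.com   out1:
client-b@example.com   out2: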
I've been a loyal fan of Red Hat Linux for the last 5 years. I switched to Ubuntu just a few days back, and let me tell you, I've already started loving it. However, I'm facing a few issues while downloading from the net. While downloading software using the Package Manager, a BitTorrent client or directly through Firefox, I get an error if my computer remains idle for around 20-30 minutes. The error says 'Check your internet connectivity', and I then need to redial to connect to the internet. I never faced this issue in Red Hat or Windows; I used to leave my machine downloading for the entire night and never had a connectivity problem. Whereas if the computer is not idle and I'm hitting keyboard keys, the download happens smoothly. It seems like some thread is checking my status, and if my status is idle it's disconnecting the net.
I am mounting a remote directory using sshfs over VPN. If the VPN connection is lost, the directory obviously can't be read. But when I try to "ls" in its parent directory, the command just stalls: no error messages, and Ctrl-D, Ctrl-C and Ctrl-Z don't do anything. The command I ran to mount the directory was:
Code:
sshfs -o workaround=rename bt@example.com:/dir1 /dir1
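Two things that commonly help with this, assuming a reasonably recent sshfs and OpenSSH: mount with keepalive options so a dead link errors out instead of blocking forever, and use a lazy unmount to free a mountpoint that is already hung.
Code:
# Keepalives: give up after 3 missed 15-second probes, then reconnect
sshfs -o workaround=rename -o reconnect \
      -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
      bt@example.com:/dir1 /dir1
# Free an already-hung mountpoint (lazy unmount)
fusermount -uz /dir1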
I was browsing the available packages to install the weather indicator and noticed that a network indicator was available. apt-get said that it needed to install 'connman' and remove 'network-manager' and 'network-manager-gnome' in order to install the network indicator. I foolishly assumed it knew what it was doing so I went ahead and performed the installation/removal and restarted the computer.
Now I cannot connect to the internet. I only have an ethernet card for connection to the internet, no wireless or otherwise. The configuration I use is not DHCP, but manual settings of address/subnet/gateway and name servers.
When I try to set the network values through the network indicator, some of them do not stay set. The gateway value appears to be set, but after closing the indicator window completely it comes back with the value 'Modified'. The DNS servers field is the only other value that does not stay set; it just goes blank. The connection never works.
So, I downloaded 'network-manager' and 'network-manager-gnome' on another computer, transferred the packages by USB, and reinstalled them (uninstalling connman and the network indicator). But the network connection remains broken. At least with 'network-manager-gnome' the values I set stay set, they just don't work anymore. I've tried editing /etc/network/interfaces to previously working values directly, but they do not seem to have any effect either. The connection no longer works.
I really do not care if I use the network indicator or not at this point, I simply want the network connection to work properly.
Edit: I realize I forgot to say that I'm using Ubuntu 11.04 (upgraded from 10.10). The network card is the "Integrated Broadcom 57780 Gigabit Ethernet controller".
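With the managers misbehaving, the static setup can be brought up by hand to rule out a driver problem; a sketch assuming eth0, with illustrative addresses (substitute your own):
Code:
# /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1

# Without the resolvconf package, name servers go in /etc/resolv.conf:
#   nameserver 192.168.1.1

sudo /etc/init.d/networking restart
ip route show   # should list one "default via 192.168.1.1" line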
I have installed Ubuntu 10.04 64-bit server. I have two NICs and have set them both up as static, each with its own IP and the correct gateway, network, broadcast, subnet and dns-nameserver. When I have both enabled, I can ping local PCs but I can't ping internet sites like Google, nor can I get out to the internet with apt-get or Lynx.
If I disable one, then I am able to get out to the Internet. All my configs look good, and it does not matter which one I disable, just so long as there is only one NIC on, everything is good.
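Defining a gateway on both static NICs gives the kernel two default routes, and outbound traffic can leave via whichever one it picked first, often with a source address the far side won't route back. The usual fix is to keep the gateway line on only one interface; a quick check:
Code:
# Two "default via ..." lines here is the smoking gun
ip route show
# Remove "gateway ..." from one stanza in /etc/network/interfaces, then:
sudo /etc/init.d/networking restart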
The latest FC12 update included dhclient-4.1.1-9.fc12, which fails to configure eth0 after restarting the computer. I had to remove/downgrade to 4.1.0* (and reinstall dracut and NetworkManager(-glib)). There is no config file in /etc/dhcp/dhclient.d, so I don't know how to reset dhcp/eth0 to connect. I thought someone smarter than me might have noticed this by now and worked out a solution, but I can't tell that anything in Bugzilla sounds like my problem. It might only be an x86_64 or KDE 4.4 problem, as my 32-bit GNOME install is not affected. It's easy enough to grab the RPMs, just annoying. I still haven't learned how to keep a package at version x.x with Fedora, but that's another topic.
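On that last point, yum has a version-lock plugin that pins a package at its installed version; a sketch, assuming the Fedora package name yum-plugin-versionlock:
Code:
# Install the plugin, then pin the working dhclient
yum install yum-plugin-versionlock
yum versionlock dhclient
# Inspect or undo locks later
yum versionlock list
yum versionlock clear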
I've got a strange problem. I have the following system:
[Code]...
After doing this install everything works fine, as expected. I can reboot, shut down and boot up as much as I want and the system will work. Now, I proceed to do the following (as root obviously - sudo bash):
[Code]...
When I try to restart the system now, I get to the GRUB boot loader and then it just breaks with the following message. I've identified 'mdadm' as being the culprit here. Any idea why this would happen? Just a subnote: the reason I'm installing mdadm is to create a soft RAID as follows with the remaining space on each drive:
I have recently upgraded to 11.4 and also had to renew an AP (HP ProCurve 10ag). This was several years old but replaced an identical model which had been working well until recently. I thought it would be appropriate to flash newer firmware, as the device was four releases behind the times. The access point serves my partner's XP laptop and my openSUSE 11.4 laptop, and after flashing the firmware both machines would make a connection and then break and remake it, to the extent that the openSUSE machine became unusable. I thought it was an encryption problem and spent hours changing the AP and client setups, to no avail.
After reading a thread here I checked out dmesg and this is what I found:
Code:
[78.639247] wlan0: authenticate with 00:1d:b3:4b:ae:14 (try 1)
[78.640804] wlan0: authenticated
[78.642507] wlan0: associate with 00:1d:b3:4b:ae:14 (try 1)
[78.651195] wlan0: RX AssocResp from 00:1d:b3:4b:ae:14 (capab=0x411 status=0 aid=1)
[78.651200] wlan0: associated
[78.652222] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
[78.652283] cfg80211: Calling CRDA for country: DE
[78.674378] cfg80211: Regulatory domain changed to country: DE
This went on and on.
I updated my server (Ubuntu 10.04 LTS) today and had about 23 updates in Webmin. After the update, Apache suddenly doesn't render PHP pages anymore; instead it lets you download the source code of the scripts. I checked the logs and found this:
Code:
Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: cannot open shared object file: No such file or directory
I tried reinstalling php5 but I can't get it to work. I also hope that this was a glitch and that updates that break things like this don't get through QA.
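The missing libphp5.so is shipped by libapache2-mod-php5 rather than by the php5 metapackage, so that's the more likely package to reinstall; a sketch:
Code:
sudo apt-get install --reinstall libapache2-mod-php5
sudo a2enmod php5
sudo service apache2 restart
# Confirm the shared object is back
ls -l /usr/lib/apache2/modules/libphp5.so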
Alright, so I upgraded my old Fedora 10 server to Fedora 11 with a netinstall CD, but now "service httpd restart" is broken. I already had to delete the old config file and reinstall Apache, but now I can't restart it. I can kill it and then start it manually, but the service command never stops the running instance and fails to bind the port. I know it's because /var/run/httpd/httpd.pid is in the wrong place, which would be /var/run/httpd.pid, but do I have to make a symlink every time I want to restart it? I edited all the config files to point to the right place, but the system does not honor them. What do I do?
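The daemon's PidFile directive and the path the init script polls have to agree, so it's worth confirming what each actually says before assuming the edit took effect. A quick check, assuming stock Fedora paths:
Code:
# What the config really says (a relative PidFile is under ServerRoot)
grep -ri '^pidfile' /etc/httpd/conf /etc/httpd/conf.d
# What the init script expects
grep -i pid /etc/init.d/httpd | head
# Validate the config before the next restart
httpd -t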
Since the upgrade from Lenny to Squeeze on my Toshiba Satellite Pro U200 notebook with an Intel PRO/Wireless 3945ABG, I have wireless connection problems. The connection breaks from time to time, and sometimes it cannot connect automatically after a restart. BTW, I didn't change anything in the wireless or network configuration on the notebook or on the wireless router.
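On Squeeze the 3945 driver needs firmware from the non-free archive area, and missing or stale firmware can look exactly like this; a quick check, assuming the Debian package name:
Code:
# Any firmware load errors from the driver?
dmesg | grep -i iwl3945
# The firmware ships in non-free
apt-get install firmware-iwlwifi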
I am running an Ubuntu Server on a VirtualBox VM running on my windows machine. So I've created a self-signed certificate using the following tutorial: [URL]
From this tutorial I'm left with 3 files: server.key, server.csr and server.crt.
Then I found this very similar tutorial that has an extra bit on installing the certificates in Apache: [URL] So I followed its instructions, which boil down to this:
[Code]...
So I'm thinking this should work now. However, in Chrome I get: "SSL connection error. Unable to make a secure connection to the server. This may be a problem with the server, or it may be requiring a client authentication certificate that you don't have. Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error." IE8 gives me a typical "Internet Explorer cannot display the webpage". Note that [URL] fails while [URL] works fine, so it's definitely something in my SSL setup, I'm thinking.
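ERR_SSL_PROTOCOL_ERROR typically means Apache answered port 443 in plain HTTP, i.e. the ssl module or the SSLEngine directive never took effect. A checklist sketch, assuming Ubuntu's Apache layout; the certificate paths are illustrative:
Code:
sudo a2enmod ssl            # brings in "Listen 443" via ports.conf
sudo a2ensite default-ssl   # or your own SSL vhost

# The vhost must contain, at minimum:
#   <VirtualHost *:443>
#       SSLEngine on
#       SSLCertificateFile    /etc/ssl/certs/server.crt
#       SSLCertificateKeyFile /etc/ssl/private/server.key
#   </VirtualHost>

sudo apache2ctl configtest && sudo service apache2 restart
# Verify a real TLS handshake happens
openssl s_client -connect localhost:443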
I wish to prevent some programs from "phoning home", and to allow other programs to access only specific web servers. Is there any way to interactively allow or decline outbound communication from individual programs on Ubuntu?
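Stock Ubuntu has no interactive per-program prompt; the nearest built-in mechanism is iptables' owner match, which filters by the user a process runs as. One workaround is to run an untrusted program under a dedicated user and block that user's outbound traffic. A sketch; the username and program name are hypothetical:
Code:
# Dedicated no-network user
sudo adduser --disabled-login nonet
# Reject everything that user sends out
sudo iptables -A OUTPUT -m owner --uid-owner nonet -j REJECT
# Run the suspect program under it
sudo -u nonet some-program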
Looking at the output of netstat, I'm not seeing a definitive way to tell which torrent connections are clients reaching in to my machine vs my machine reaching out to the world. Is there a clear way to determine which is which?
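The direction is encoded in which end holds the well-known port: connections whose local port is your torrent client's configured listen port were initiated by remote peers, while ones going from an ephemeral local port to a remote peer's port were initiated by you. A sketch, assuming an illustrative listen port of 51413:
Code:
# Inbound: peers that connected to your listen port
netstat -ntp | awk '/^tcp/ && $4 ~ /:51413$/'
# Outbound: connections your machine opened from ephemeral ports
netstat -ntp | awk '/^tcp/ && $4 !~ /:51413$/ && $6 == "ESTABLISHED"'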
I have an internal-only email server that has internal BIND9 running. Though it only has its own IP address defined in /etc/resolv.conf, it is still resolving outside addresses.
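That's expected behavior: BIND recurses from the root hints by default, with no forwarders needed. If outside resolution is unwanted, recursion can be limited in named.conf.options; a sketch (the network is illustrative):
Code:
options {
    // Only answer recursive queries from inside
    allow-recursion { 127.0.0.1; 192.168.0.0/24; };
    // Or serve authoritative zones only:
    // recursion no;
};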
I understand the difference between Reject vs Drop for incoming traffic, but are there any differences between reject and drop for Outbound Traffic? Are there reasons to pick one over the other or are they functionally identical when talking about Outbound traffic?
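For your own applications they differ in failure mode: REJECT sends back an immediate error so the program fails fast with "connection refused" or similar, while DROP silently discards the packet and the program blocks until its own timeout expires. Outbound, REJECT is therefore usually kinder to local software. A sketch:
Code:
# Fails instantly
iptables -A OUTPUT -p tcp --dport 6667 -j REJECT
# Hangs until the application gives up
iptables -A OUTPUT -p tcp --dport 6667 -j DROP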
What should I do to keep important files on my computer from being uploaded to the internet? Don't I need an outbound firewall to prevent this?
What causes my computer to send an outbound request to the internet that would result in files being uploaded from my computer onto the internet? I'm afraid to put anything of importance (like reports that I've written for work) onto a computer with internet access because I don't want them to be uploaded to the internet. I wouldn't upload them on purpose obviously, but I'm afraid it would happen without my knowledge because I don't know what I'm doing.
(CentOS 5.5 x86_64 with cPanel) I am trying to set up a PHP script.
The script requires an outbound connection to project honeypot and when I go to the honeypot.php on my server I get an error asking if outbound connections are disabled.
They could be... I am not sure where to check. I have checked csf, and outbound TCP is allowed on port 80, but I am not sure if I should be looking somewhere else.
Obviously I don't want to make the server insecure, so I am wondering how I can allow this outbound connection.
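A direct way to separate firewall trouble from PHP trouble is to test the outbound path from a shell, then look at csf's outbound port list and PHP's own settings; a sketch:
Code:
# Does raw outbound HTTP work from this box at all?
curl -v http://www.projecthoneypot.org/ > /dev/null
# csf's allowed outbound TCP ports
grep '^TCP_OUT' /etc/csf/csf.conf
# PHP also needs outbound URL fetches enabled
php -i | grep allow_url_fopen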
Using Windows, I always set a Restrictive firewall policy with a third party firewall. But I also had all ports set to Stealth, something that appears to not offer any security benefits (as I've learned from reading Ubuntu forums). I'd like to learn about best security practices (under Ubuntu) for outgoing firewall protection. I will be using the built-in Ubuntu firewall that is configured via Firestarter. Outgoing filtering offers privacy as well as security benefits. But I thought I needed my ports stealthed to be safe too, so I'm open to learning new things.
I wanted to start a poll to find out how many folks use permissive/restrictive, but no polls are allowed here apparently. Could Ubuntu users knowledgeable about firewalls enlighten me on whether I should go outbound-restrictive, and which applications I will need to allow so Ubuntu "housekeeping" is not affected negatively? I basically just use the internet for software updates, web surfing and e-mail. One question I have is whether there is anything in Ubuntu comparable to Windows' "DNS Client" service. I always disabled Windows' "DNS Client" and forced each application to request port 53 DNS lookups itself. I only had to allow four programs to cover all the internet traffic that I engage in; I set all other programs/applications to be either blocked or to have to ask for an outgoing connection as needed. Here is my former Windows XP setup:
svchost.exe: allow UDP for ports 53, 67, 68, 123 (time) and TCP for ports 80, 443
Avast: allow UDP for port 53 and TCP for port 80
firefox: allow UDP for port 53 and TCP for ports 80, 443
IE: allow UDP for port 53 and TCP for ports 80, 443
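The iptables layer that Firestarter drives matches on ports and addresses rather than per program, so the rough equivalent of the list above is a default-deny OUTPUT policy with explicit port allows; a sketch:
Code:
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# DNS, DHCP and NTP
iptables -A OUTPUT -p udp -m multiport --dports 53,67,68,123 -j ACCEPT
# Web and secure web; apt repositories also use these ports
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT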
I'm having a problem that seems to plague a lot of people judging from my research on the web. I have a hosting provider that limits the number of incoming connections to the shared host to 50 per IP.
I have a single IP for outbound connections and I use Squid as a proxy server.
Lately I've tripped across the 50-connection limit frequently, and that's with only 1 user. It seems the problem is related to the performance you can get out of a desktop these days. It's easy to have several browsers open with several connections to different sites on the same server, and boom: locked out!
So it occurred to me that there must be some way to limit the number of outbound connections in the kernel - but I've not found it. I did find that Microsoft had been limiting the number of outbound connections in XP to 10 to address the virus problem, and I've found countless hosting complaints and dialog on the subject with no easy solution.
So my question is simply, does anyone know how to limit the number of OUTBOUND connections to a single IP in the kernel?
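There's no plain kernel sysctl for this, but iptables' connlimit match works at the netfilter layer and can cap concurrent connections traversing a rule. On the OUTPUT chain it groups by source address, which on a single-IP box means all matching outbound connections count together; adding a destination match scopes it to the hosting provider. A sketch, with an illustrative destination IP and a cap just under the provider's 50:
Code:
# Refuse new outbound TCP connections to the host beyond 45 concurrent
iptables -A OUTPUT -p tcp --syn -d 198.51.100.7 \
         -m connlimit --connlimit-above 45 -j REJECT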
Is there a way to configure my interface in promiscuous mode and also make it not capture the transmitted packets? I mean, I want the interface in promisc mode but only for inbound traffic. If there isn't any way using ifconfig, can it be done by configuring eth0 to promisc using ifconfig, and then filtering outbound traffic from being captured using sockets or something?
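Promiscuous mode itself has no direction setting; the direction is filtered at capture time instead, either with libpcap's direction support or with a BPF filter that excludes frames your own MAC transmitted (with raw sockets, the equivalent is skipping packets whose pkt_type is PACKET_OUTGOING). A sketch; the MAC address is illustrative:
Code:
# Put the NIC in promiscuous mode
ip link set eth0 promisc on
# Newer tcpdump: capture inbound only
tcpdump -i eth0 -Q in
# Portable fallback: drop anything sourced from our own MAC
tcpdump -i eth0 'not ether src aa:bb:cc:dd:ee:ff'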
My setup is a local install, so I don't expect it to receive emails from the internet. However, I do expect it to be able to send messages to the internet, but it doesn't seem to. I have tried setting this up on FreeBSD before and it was able to send, though I wasn't involved in setting up that machine; I was just tasked to set up Horde.
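A first check is whether outbound SMTP leaves the box at all, since many ISPs block port 25 from end-user lines; a sketch (the MX host shown is one of Gmail's, used as an arbitrary test target, and the log path varies by distro):
Code:
# Find a destination MX and try to reach it on port 25
dig +short mx gmail.com
telnet gmail-smtp-in.l.google.com 25
# Send a test message and watch the mail log
echo test | mail -s "test" someone@example.com
tail -f /var/log/maillog   # or /var/log/mail.log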
I want to prevent code from making HTTP connections to other, specific hosts. My understanding is this can be done in /etc/hosts.deny. What would that look like?
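One caveat worth knowing: /etc/hosts.deny governs inbound connections to TCP-wrapper-aware services, not outbound ones, so it can't express this. Outbound blocking is usually done with an iptables OUTPUT rule or a null /etc/hosts entry instead; a sketch with an illustrative hostname:
Code:
# Block outbound HTTP to one host at the packet level
iptables -A OUTPUT -p tcp -d tracker.example.com --dport 80 -j REJECT
# Or short-circuit name resolution for local code
echo "127.0.0.1  tracker.example.com" >> /etc/hosts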