Ubuntu Servers :: Slow Communication With Server Using SSH?
Feb 15, 2010
Today I installed Ubuntu Server 9.10 on my old Sony laptop with 512 MB of RAM and a 2.4 GHz Celeron CPU. I hooked the server up to my D-Link router with a cable. Now when I try to upload or download data (mostly my music) to the server, the speed is no higher than 500-700 KB/s. At first it was around 1.2 MB/s, which is still low. Does anyone know what I should do? By the way, I use SSH on Ubuntu 9.10 to connect to my server.
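On a CPU that old, SSH encryption itself is often the bottleneck. A rough sketch of how one might test that (host and path are placeholders): try a cheaper cipher and watch whether the rate changes while sshd's CPU usage drops.

Code:
# Compare transfer speed with a lighter cipher (host/path are placeholders):
scp -c aes128-ctr ~/music/test.mp3 user@server:/tmp/
# While it runs, watch the server: if sshd pegs the CPU, encryption
# overhead on the old Celeron is the likely limit.
top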
I'm running Ubuntu 10.04 and am currently living in a different country from where my server is located. We have a 4 Mbps DSL connection that speedtests fine to the closest server, and if we test from our present location to the speedtest.net server closest to my server, the speed is great as well. But when I SSH in to my server and set up a SOCKS 5 proxy, the speed goes through the floor; we barely get 0.2 Mbps. Whether I use PuTTY or Terminal on our Mac, they both get the same slow speed. I've searched the forums and Google extensively, but I haven't been able to come up with anything yet.
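For reference, the kind of setup being described, plus a way to measure throughput through the proxy alone (hostname and test URL are placeholders):

Code:
# Open the dynamic SOCKS tunnel; -C adds compression, which sometimes
# helps on high-latency links (hostname is a placeholder):
ssh -C -D 1080 user@myserver.example.com
# From another terminal, pull a file through the proxy and watch the rate:
curl --socks5 localhost:1080 -o /dev/null http://example.com/largefile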
It stores all my important stuff, as well as some music and movies. I use a second Linux box in my living room to "stream" content via an NFS or Samba share. The streaming tends to stop several times during playback and needs to fill up its buffer again before continuing to play. I also have some Windows XP and 7 based computers that connect to this file server. I have noticed that directory listing is VERY slow, and there is a huge lag when I want to save/read a file to/from my home directory.
This is my setup: Ubuntu Server 10.10 64-bit (I have the same problem with 32-bit Ubuntu), 3 RAID 5 arrays with 4 hard drives each, and LVM on top of the 3 RAID 5 arrays. The logical volume I use is about 6.5 TB, with the ReiserFS file system. This LVM has grown over the years and has had some disks replaced, so I have used the pvmove and extend commands a bit. I have tried using iotop and top to check whether there aren't enough resources available, but that doesn't seem to be the problem. I haven't been able to find out why streaming over the network stops, but I know it is the server that causes the problem. Does ReiserFS have any performance problems with large logical volumes? Would changing to ext4 or some other FS give any performance gain?
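Before blaming the filesystem, it's worth timing the array itself with the network out of the picture. A sketch (the mount point is a placeholder for somewhere on the LVM volume):

Code:
# Sequential write and read straight to the LVM volume:
dd if=/dev/zero of=/mnt/storage/testfile bs=1M count=1024 conv=fdatasync
dd if=/mnt/storage/testfile of=/dev/null bs=1M
# Watch per-device utilisation and wait times while a stream stalls:
iostat -x 2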
I have an embedded device which acts as a client, and a server at the office. The IP address of the client is allocated by DHCP, so its IP address is actually variable. I would like to know the simplest way to maintain communication between client and server when DHCP is enabled. I once used sockets many years ago to communicate between the client and the server. If I remember right, I actually bypassed the dynamic IP issue by using the computer name in place of the IP address. Even when the client was not on the same local network as the server, the scheme still worked. Correct me if my memory cheats me. Is a socket still the best solution for this kind of application involving DHCP? I have also heard from someone that it is necessary to implement a multicast discovery protocol such as Bonjour, but I don't think it is necessary.
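Connecting by hostname is still the usual answer: as long as the client's name resolves (via the office DNS, /etc/hosts, or mDNS), ordinary sockets work and DHCP address changes don't matter. Quick checks of whether a name resolves (hostnames are placeholders):

Code:
getent hosts myclient             # ordinary DNS / /etc/hosts lookup
avahi-resolve -n myclient.local   # mDNS (Bonjour-style) lookup, if Avahi runs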
Has anybody tried to get the USB server 'LANSER-E1' from 'MRT Communication' working in Linux? It works OK in Windows, but I have failed to get it working in Debian Sid.
The Server in the above diagram can be accessed by Client3 and Client4 but not at all by Client1 or Client2. Router0 specifies the Server as a DMZ Host. I would be more specific but this is not my server. I don't use a DMZ, I forward ports when they are needed. In this case I represent ISP1 and the server belongs to a befuddled client. Client1 & Client2 can send packets to each other, no problem. Could the DMZ be breaking communication between the Server and Clients 1 & 2?
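A sketch of how one might narrow this down from both ends (addresses are placeholders):

Code:
# From Client1, see how far packets toward the Server get:
traceroute -n 203.0.113.10
# On the Server, check whether Client1's packets arrive at all:
sudo tcpdump -n host 198.51.100.20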
I was going to upgrade Apache from version 1.3 to version 2.2.15 on a test server (and later on the prod servers). I prepared really well for it by reading the upgrade information and the changes from 1.x to 2.x, and I read the module documentation to check whether any of the modules we use had different syntax and whether they were supported by this new version of Apache. Later I finally installed Apache, upgraded all the used modules to be Apache 2 compatible, and everything seemed to go smoothly. I started Apache and Resin, everything looked fine, the syntax was OK, so I reported back that it was ready. But I got feedback that the people who were going to use the server only got a proxy error or service unavailable error when Apache tried to send requests back to the backend Resin server. And this is the problem. I could just have told you that this was the problem, but I find it much easier if you have a little background information first to explain how the problem arose.
A couple of other people and I have done quite extensive troubleshooting on this. We tested Apache, mod_caucho and Resin by themselves to check whether there were any problems with them; we could only find that mod_caucho didn't seem to behave correctly - the caucho-status page was displaying an empty list of requests and of what would be proxied back. We are running the same versions of mod_caucho and Resin to be sure they would be compatible with each other, and all the software is 64-bit, on a 64-bit CentOS 4.4 OS (yes, the OS is really old, but because of complications we are not ready to upgrade it yet).
When we try to telnet to the port where Resin is listening, it's open, and Resin returns an X when you hit enter (as it should, according to Caucho's official troubleshooting tips). We have tried installing Apache 2.2.11 instead of 2.2.15, and other versions of both Resin and mod_caucho. Other modules have also been tried, like mod_proxy to send the requests back, and the rewrite module with the P flag. Although this was not the preferred way to do it, we only wanted to see if it made any difference, which it didn't.
So after several hours of troubleshooting, we are still not able to find out why we are getting a proxy error (HTTP 502) when Apache tries to send a request back. We have checked all the logs for Apache and Resin, and we have also compiled the mod_caucho module with the "--enable-debug" option to get more verbose information, but we didn't get any additional information this way, even though we should have according to Caucho's official site.
We have looked into both the Resin and Apache configs and tried both ways of configuring mod_caucho, and nothing changes the result: we only get a 502 proxy error. Since it didn't work, I downgraded to Apache 1.3 one day and that was fine. Then it was upgraded again to Apache 2.x to try to find out what the problem was, but again we couldn't. So I was going to downgrade again, this time on a different test server, but after downgrading, for some very weird reason I got the same proxy error I had when using Apache 2, which is beyond my comprehension. I just cannot understand this.
- Has anyone had any similar experiences, with or without the same setup?
- Does anyone know of any solution to this, or can anyone provide additional advice on what to do next? Yes, we want the functionality in Apache 2, so we will not settle for Apache 1.3 - one of the features being the new filters, to be able to capture requests for pre- and post-processing before they hit other modules.
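For comparison, a mod_proxy fallback of the kind mentioned above looks roughly like this (backend host, port and path are placeholders; mod_proxy and mod_proxy_http must both be loaded):

Code:
# httpd.conf excerpt - plain reverse proxy to the Resin backend:
ProxyPass        /app http://localhost:8080/app
ProxyPassReverse /app http://localhost:8080/app
# If even this returns 502, the problem is between Apache and the backend
# socket (firewall, SELinux, listener address) rather than inside mod_caucho.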
I have two Ubuntu boxes. One is a 9.04 desktop edition and the other is a 9.10 server edition. I am working on some code that needs to be highly tolerant of bad network connections. It sends transactions to a central database, but when the network is not available, it caches them locally to retry later.
I have the code working beautifully on my desktop box, but when I test it on the other box (the one running server edition), there is a HUGE DELAY every time it tries and fails to send a transaction to the database while the network is down.
I tested a little further and found that if I unplug my network cable and run ping somehost on the desktop, it fails instantly, saying "ping: unknown host somehost". But if I unplug the cable on the server box and run the same ping command, it lingers for about 40 seconds before the ping fails.
Does anybody have any idea why this might be happening? Is this a 9.04 vs 9.10 difference? Is this a desktop vs server difference? Is there some package I can install, or some config setting I can change that will make the server box insta-fail just like the desktop does?
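The 40-second hang looks more like DNS retry behaviour than like ping itself. A hedged way to test that theory is to shorten the resolver's timeouts and see whether the failure becomes fast:

Code:
# /etc/resolv.conf (excerpt) - note that on 9.10 dhclient may rewrite
# this file, so the change might need to go in the dhclient config instead:
options timeout:1 attempts:1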
I'm running a Linux cloud server with the following config: 1.2 GHz processor allocation, 752 MB RAM.
The site loads slowly, and clicking a link almost freezes the page for a second. The page loads could be much faster, too. We've been running mysqltuner and have pretty much optimized all the slow queries. Is there anything we can do to fine-tune the server to make it faster and more responsive?
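If the slow query log isn't already on, it's the cheapest way to confirm whether MySQL is still the bottleneck. A sketch for a 5.1-era my.cnf (the threshold is an example value):

Code:
# /etc/mysql/my.cnf (excerpt):
[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 1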
About 3 months ago I upgraded from 8.04 to 10.04, and the experience has since been very problematic in terms of overall performance. The problem is mostly with MySQL, but I have also noticed that the smallest amount of disk IO slows down the system a lot. I was expecting a slight performance improvement when I upgraded, but instead I got the very opposite. I tried tweaking the MySQL server settings, but the improvement has been minimal.
At this point I am going to have to build a new system with something like CentOS (I've heard good things about its performance). Before I do, I want to give Ubuntu one last chance and ask if anyone knows anything that can be done to fix the performance issues - or at the very least get back to something comparable to 8.04.
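Two hedged things worth checking before switching distros: whether InnoDB has enough buffer pool, and how much each commit costs in disk flushes. The values below are illustrative starting points, not recommendations:

Code:
# /etc/mysql/my.cnf (excerpt):
[mysqld]
innodb_buffer_pool_size        = 256M
innodb_flush_log_at_trx_commit = 2   # relaxes per-commit flushing; trades
                                     # a little durability for less disk IO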
All of my PHP web pages are now loading incredibly slowly. I created a simple "Hello World!" script and timed retrieving it from a terminal using wget: it took 3 minutes 9 seconds. A wget of the home page of a PHP-based site also took 3 minutes 9 seconds. I have a PHP script that I run from the command line to look for malicious FTP attempts, and it took - you guessed it - 3 minutes 9 seconds. I am running Ubuntu 8.04 LTS and have applied all of the latest updates for that version.
One thing I did notice was a proliferation of apache2 processes. With every request for a page I seem to get 7 or so new apache2 processes.
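Since the command-line script shows the same fixed delay, Apache itself probably isn't the cause; a consistent 3 min 9 s looks like something timing out inside PHP. A sketch of how to see where it blocks (the script path is a placeholder):

Code:
# Trace a CLI run with timestamps and look for the long pause:
strace -f -tt php /path/to/hello.php 2>&1 | tail -50
# A long gap on a connect() or DNS-related call points at the culprit.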
I have 8.04.3 Server, 32-bit. It is on an HP 2.3 GHz Pentium 4 with 512 MB of RAM. I had previously installed Ubuntu and Kubuntu in a dual boot with XP (actually a triple boot with XP, Ubuntu, and a beta of Windows 7) for a while; GRUB got messed up, so I ended up wiping it and just putting XP back on. I had been intimidated by the command line, and when thinking server, I tried the 30-day deal with Windows Home Server. It's OK, but it's $100.00 for the permanent version.
Anyway, I manned up to the challenge, installed Ubuntu Server, and set up SSH and Samba. I administer it from PuTTY and Webmin. I tried TightVNC, but the GUI seemed pretty useless, so on a reinstall I went headless. The problem is I have a lot of MP3 files I want to transfer back to the server, files I had previously transferred to and back from the Windows server - which was screamin' fast compared to this. I have found threads talking about using NTFS; I have ext3, or whatever the Ubuntu format is. Could that cause slow transfers, from NTFS through Samba to Ubuntu?
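The filesystem (ext3 vs. NTFS) is rarely the bottleneck here; Samba settings usually matter more. The commonly suggested "speedup lines" look like this - treat them as something to test rather than a sure fix:

Code:
# /etc/samba/smb.conf (excerpt):
[global]
   socket options = TCP_NODELAY IPTOS_LOWDELAY
   read raw = yes
   write raw = yes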
Then I connect into work through a secure VPN connection that used to be reasonably fast. Sometime in the last year or so it has slowed down dramatically, as if they added a one-minute delay between pages. Everything works fine otherwise; it's just slower. At work they use all Microsoft software, server software, etc., and if I connect with my friend's Windows 7 laptop it works quite well, and the pages load quickly like my Kubuntu laptop used to. What I'm wondering is whether this is some kind of compatibility issue between Internet Explorer and Firefox, or just a sour-grapes issue on Microsoft's behalf where they slowed things down on purpose.
I have many OpenVPN implementations. Every time I use Windows shares over OpenVPN, the speed is no more than 500KB/s, in a LAN environment. When I start a copy it reaches 200-300KB/s; when I start a second one, it reaches 500KB/s, and no more is reached with additional simultaneous copies. When I use Linux to copy files, the first copy reaches 700KB/s, the second copy reaches 2.5MB/s (and then the first grows to 2.5MB/s as well), and a third copy also reaches 2.5MB/s. All of these are copied simultaneously; when only one is running it sits at 700KB/s. Moreover, when 2 of the 3 simultaneous copy processes end, the one left drops back to 700KB/s again.
But that is Linux. When I use Windows, the transfer speed is no more than 400-500KB/s (LAN environment). The OpenVPN server is always Ubuntu (any version - I've tried 6.06, 8.04, 10.04).
I've tried the OpenVPN client on Ubuntu (with the Windows machine behind the Ubuntu box) and on Windows (client installed directly on Windows), and it is all the same - no more than 500KB/s.
I cannot use this because it is so slooow. When only one file is copied at a time it reaches only 200KB/s! I've searched all the Google results; no one has an answer, although there are many people with the same problem.
Now, I am sure the problem is on the Windows side, because when I use Linux as both server and client, the client copies fast, but when I use Windows as the machine behind the client, it copies slowly. I don't know... something in the TCP/IP settings in Windows, or something...
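SMB over OpenVPN hitting a hard ceiling like this is classically an MTU/fragmentation problem. A hedged sketch of the usual knobs (UDP tunnels only; the same values must go on both server and client, and 1300 is just a starting point to experiment with):

Code:
# server.conf and client.conf (excerpt):
fragment 1300
mssfix 1300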
I posted this yesterday, but my post completely disappeared (I looked high and low - nothing). I am using Ubuntu Server 10.04 with all the latest updates. For an FTP server, I use ProFTPD.
One specific directory and its subdirectories on my server will not download at a reasonable rate; they move at about 17-50 KB/s. All other folders work fine, at around 1.5-2.5 MB/s.
What is going on? I have no idea how to troubleshoot this. The files being transferred are in a directory under /home. They should have no permissions issues (I reapplied the permissions I want already), I tried restarting ProFTPD, and the files vary in size (from a few kilobytes to about 120 megabytes). I use Webmin for most web management.
I am not having overload issues with my network card or CPU utilization while downloading these files. They are being accessed from the local network.
This issue is taxing because the files in question are backup files.
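A hedged first split of the problem: read one of the slow files straight off the disk, taking FTP out of the picture entirely (the path is a placeholder):

Code:
dd if=/home/user/backups/slowfile.tar of=/dev/null bs=1M
# If the local read is fast, copy the same file into one of the "fast"
# directories and FTP it from there; if it then transfers quickly, the
# problem is path-specific rather than disk or network.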
I have Ubuntu Server 10.04 on a box with a 2.8 GHz CPU and 1 GB DDR2, with the OS on a 2 GB CF card attached to the IDE channel and a software RAID 5 of 4 x 750 GB drives. On a Samba share using these drives I am only getting around 5 MB/s, connected via wireless N at 216 Mbps, with both my router and server having gigabit ports. Is RAID 5 supposed to be that slow? I was seeing reports of anywhere from 20-50 MB/s from other people, and I'm just wondering what I am doing wrong to be so far below that.
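It's worth separating the two suspects, the array and the wireless link. A sketch (assuming the array is /dev/md0; the address is a placeholder):

Code:
# Raw sequential read from the array, no network involved:
sudo dd if=/dev/md0 of=/dev/null bs=1M count=2048
# Raw network throughput, no disks involved:
iperf -s                  # on the server
iperf -c 192.168.1.10     # on the wireless client
# Wireless N rarely delivers its headline rate; 5 MB/s may simply be the
# real-world wireless ceiling rather than a RAID problem.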
I've got Ubuntu Server 10.04 on a fairly beefy box (quad-core Xeon 2.67 GHz, 2 GB RAM) with the standard mysql-server installed and many databases. Lately, MySQL has been extremely slow and almost non-responsive, though server load is low. Running mtop reveals many, many processes from the user debian-sys-maint querying the information_schema tables with the exact same query, over and over: "Select count(*) from tables where engine = 'innodb'"
This is adversely affecting my database server, and thus my websites which rely on MySQL. Every search I've done looking for more information about the debian-sys-maint user turns up problems where that user was deleted. My user isn't deleted.
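A sketch of how one might watch this live and see where the queries originate. On Debian/Ubuntu, debian-sys-maint's credentials live in /etc/mysql/debian.cnf, and the repeated information_schema queries typically come from the packaged maintenance script:

Code:
# Watch the processlist using the maintenance account's own credentials:
sudo mysqladmin --defaults-file=/etc/mysql/debian.cnf processlist
# The script that runs those checks when the daemon starts:
less /etc/mysql/debian-start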
Whenever a client tries to download a file from my server via FTP, Samba, TeamSpeak 3 file transfer, etc., they report very slow download speeds, around 3-6 KB/s. If I try an FTP file transfer locally, the upload speeds are normal, but I still experience slow download speeds.
My server is connected to a router, which connects to the internet. All other machines connected to that router can upload and download files at normal speeds. It seems to be a server problem, I just don't know where to start.
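Since every protocol is slow in the same direction, a raw TCP test plus a look at the NIC's negotiated link is a reasonable place to start (the address is a placeholder):

Code:
iperf -s                        # on the server
iperf -c 192.168.1.10 -t 30     # on a LAN client
# If iperf is also slow, check the link negotiation; a NIC stuck at
# 10 Mb half-duplex produces exactly this kind of one-sided slowness:
sudo ethtool eth0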
I recently built a small server for my dad, to host a business website as well as manage storage of important documents (RAID 1). Yesterday I thought I would try out Zentyal. I got it working, mostly, and it seemed very useful. However, ANY password authentication - including login, sudo, and ssh - was extremely laggy. We're talking a minute after entering the password. I have done
Code:
sudo apt-get purge zentyal zentyal-samba
And
I've got three disks together in a *home* server that constitute four LVs. The first two are the root and swap LVs installed by the Ubuntu 10.04.2 LTS installer on the OS drive (250 GB, VG: Beta). The third and fourth LVs I made from two physical volumes (640 GB and 200 GB, VG: Data) and mounted each inside /torrent. All are ext4.
I'm migrating lots of large files from /home to inside /torrent, but I'm seeing EXTREMELY slow speeds (700 KB/s). I'll admit this is my first time using LVM, and I tried it only because of the numerous smaller drives I have sitting around that weren't getting used. I didn't expect such a large drop in speed.
Here's a more technical review of the setup:
Code:
me@Beta:~$ sudo pvdisplay
[sudo] password for me:
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               Data
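A hedged way to check whether one slow physical volume is dragging the whole VG down (device names follow the output above; the LV name is a placeholder):

Code:
# Raw read speed of each PV separately:
sudo dd if=/dev/sdb of=/dev/null bs=1M count=1024
sudo dd if=/dev/sdc of=/dev/null bs=1M count=1024
# Show which PVs each LV actually sits on:
sudo lvdisplay -m /dev/Data/torrent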
I have an Ubuntu 10.04 server here, and this week internet sharing got too slow... I dunno if it is a Squid problem... but it's too slow. And when I try to register a domain for that server, BIND gives no response. If I dig my server from inside the LAN, it works pretty well.
FTP to and from my home computers to 2 remote servers has become really slow over the past month. One of the remote servers I manage, and the other is taken care of by a hosting company, so I am thinking the problem resides on my end. It doesn't matter if I am downloading 1 file or 10 files; they all come in at 9 KB/s, which is really slow because I have a 7 megabit connection. I've tried using multiple computers and still have the same problem. I am using ProFTPD for the FTP server and FileZilla for the client.
I have an urgent issue with my Apache. Since last night, approximately 50% of my vhosts have been responding very slowly. That means I see a blank page for 1 minute and then the content comes up really quickly. I restored the httpd.conf file, but it didn't solve the problem.
I'm on Ubuntu 10.04 and using Postfix 2.7 with Dovecot's SASL. The issue is that when sending e-mail, it takes a while to send; just to get connected takes around 15-20 seconds. How can I reduce this delay so mail can be sent faster?
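A small sketch to locate the delay: time how long the SMTP banner takes to arrive on a local connection. A 15-20 second pause before the banner often points at a DNS or reverse-DNS lookup timing out on connect:

Code:
time bash -c 'exec 3<>/dev/tcp/localhost/25 && head -1 <&3'
# A fast banner locally but a slow one from remote clients suggests the
# delay is in name lookups for the connecting host, not in Postfix itself.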
It takes 45 minutes to transfer 10 MB from my laptop to my replacement server. It takes 1 minute to transfer the same 10 MB from my laptop to the old server.
All connections are equal; both servers are plugged into the same router.
Details: I have decided to migrate away from my ProLiant 1600 to a slightly newer, less complex piece of hardware.
Both machines are LAMP installs. Both are set up to be maintained headless 99% of the time, and GNOME is launched from the command line only when it is needed.
The older machine has more things running on it than the replacement. The replacement has nothing running that the older machine does not have.
Old box runs Ubuntu 6.06 but was fully updated a month ago. Replacement box runs Ubuntu 10.10 and was fully updated just last night.
smb.conf was the same on both boxes other than the share locations. Reading around while trying to fix it myself, I did put some known speedup lines into the new box's smb.conf, but it did not make a noticeable difference.
Hardware: Old box, ProLiant 1600 = 1998 small-server tech (weighs 50 lbs w/o drives), single 500 MHz Xeon (upgradable to two 600 MHz, though they are hard to find reasonably priced), 1 GB SDRAM with ECC
[Code].....
It does not matter what share/drive/partition I transfer to on either machine. The result is always the same.
On the newer computer CPU usage rarely goes over 50% and it has not had to go into swap at all yet.
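Given those symptoms, a hedged two-way split of the problem (hostname and file are placeholders):

Code:
# Raw network throughput to the new box - rules the wire/NIC in or out:
iperf -c newserver
# A non-Samba copy - rules Samba in or out:
scp bigfile user@newserver:/tmp/
# If scp is fast but Samba crawls, dump the effective Samba config on
# each box and compare:
testparm -s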
I'm experiencing issues with DansGuardian speeds being extremely slow. Basically, I have Squid up and running perfectly: when I change the ports from DansGuardian to the standard proxy I have full internet speed, but as soon as I run DansGuardian, it takes about 1 minute to load a page :/ Any ideas on what could be causing this?
I just set up Squid for the first time. I have zero experience with proxy servers. I used this guide: [URL] (I also looked at a few other guides, such as http://ubuntuforums.org/showthread.php?t=320733; however, I wanted the most barebones config to start with, and the link I used was the simplest.) So now that I have it set up, I'm testing it with FoxyProxy. It is not working well. I went to speedtest.net: the download and ping tests were just as fast as without the proxy, but the upload test fails completely. Furthermore, many web pages load very slowly or not at all. So performance is very mixed, but mostly poor.
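For reference, a barebones allow-my-LAN Squid config of the kind those guides describe looks roughly like this (the subnet is a placeholder to adjust):

Code:
# /etc/squid/squid.conf (excerpt):
http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all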
Why would this iptables setup cause this mail delivery error? I think it's to do with DNS lookups not being routed properly... if I remove the last rule, mail works fine.
ssh is also very slow to connect when the last rule is enabled.
postfix mail error:
Code:
Jan 24 11:32:18 xxxx postfix/smtp[15065]: 9F2162C519: to=<xxxxx@hotmail.com>, relay=none, delay=1005, delays=965/0.01/40/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=hotmail.com type=MX: Host not found, try again)
iptables
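That deferred-MX error is exactly what blocked DNS looks like. A hedged sketch of the holes the ruleset would need ahead of a final DROP rule (chains and interfaces depend on the actual ruleset):

Code:
# Allow outbound DNS queries (UDP and TCP port 53):
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
# And let the replies back in:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT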
My first post. I've been using Ubuntu Server edition (Hardy) happily for some time now.
I use sudo regularly during configuration of new services. It always used to work/authorise within seconds; however, it recently became very slow, to the point of being nearly unusable.
In /var/log/auth.log I noticed a regularly repeating pattern like this code...
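One common cause of sudo suddenly becoming slow is the machine's own hostname no longer resolving, so every sudo waits on a lookup before authorising. A hedged quick check and fix:

Code:
hostname                      # e.g. prints: myserver
getent hosts "$(hostname)"    # should answer instantly
# If that hangs or returns nothing, add a mapping to /etc/hosts, e.g.:
#   127.0.1.1   myserver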