After changing servers and moving things around, I have one system which serves the /home directories, and for some reason it is extremely slow. I've looked for duplicate IPs, signs of a hardware problem, etc., and cannot figure out why the system is running with a high load, high iowait, and low responsiveness. The server and clients are Slackware 12.2 and 13.1 respectively.
I've noticed recently that a lot of outgoing internet traffic is generated by my laptop (running Ubuntu 10.04 64-bit). This wasn't the case previously. I only found out because my wireless broadband traffic allowance was suddenly used up very quickly. I've installed ntop to try to find out where all this traffic is going.
I did find that there was a very high number (at one stage over 11,000) of active TCP/UDP sessions (see attached screenshot). Although the traffic generated by each one is only small (about 100 bits/bytes - not sure which), multiplied by thousands it adds up to a fair bit of traffic. I wonder if I've got some kind of virus/bug, or do I have a configuration problem with my laptop?
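To see where those thousands of sessions are going, one rough approach (a sketch assuming the net-tools `netstat` is installed; `ss -tun` from iproute2 gives similar columns) is to tally established connections by remote address:

```shell
# Tally connections per remote host.  Field 5 of `netstat -tun` output
# is the foreign address:port pair -- adjust the field number if your
# tool's columns differ.
tally_remotes() {
    awk '{ split($5, a, ":"); if (a[1] != "") print a[1] }' \
        | sort | uniq -c | sort -rn
}

# Skip the two header lines netstat prints, then rank remote hosts.
netstat -tun 2>/dev/null | tail -n +3 | tally_remotes | head
```

One remote host owning thousands of sessions points at a single chatty application (or malware); thousands of distinct remote hosts looks more like P2P software such as a torrent client left running.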
I've been using 10.04 since May and it's been good to me thus far. BUT recently I've been experiencing unusually high system loads. It seems to happen at random. For instance, it just did it a few minutes ago when I turned on and mounted an external FireWire drive. I also had OGMRip open but NOT running, as well as Transmission and Pidgin. Yesterday I was encoding a rather big movie with OGMRip, and at around 25% the system load quickly increased to 10 and held there until I force-quit OGMRip. It went back to normal after the force quit.
A previous night I was installing the popular Ubuntu themes via the terminal. At the same time I was also using OGMRip, and the system load shot up during the download/install and held there the rest of the night. So... unless it's somehow related to OGMRip, I don't know what's going on. It even did it when OGMRip WAS NOT processing any video, just simply open.
I wrote a script to extract and get the names of the *.gz files in a folder. Since I started running that script every 10 minutes, the load average on my server has increased to more than 10. I checked with 'top' and it showed many processes in the D state.
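Those D-state processes are in uninterruptible sleep, almost always waiting on disk or network-filesystem I/O, and each one adds 1 to the load average while it waits. A quick way to list them, assuming the standard procps `ps`:

```shell
# List processes in uninterruptible sleep (state D), together with the
# kernel function they are blocked in (wchan) -- often a filesystem or
# NFS routine when slow storage is the culprit.
ps -eo state,pid,wchan:20,cmd | awk '$1 ~ /^D/'
```

If the same script-spawned commands show up here every 10 minutes, the script is probably hammering a slow or remote filesystem rather than being CPU-bound.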
During downloads, the top command shows the Firefox process at 100% CPU. Yesterday I tried to download an .iso image. After a few hours the Firefox window would not refresh nor would it respond to input. I tried the wget command. It used negligible CPU time and completed in 28 minutes.
This problem is easy to reproduce because it happens every time I download a file in Firefox. It also happens when I use a fresh profile to run Firefox without any extensions or plugins.
I have high load on my server and my investigation shows nothing (so I believe my investigation is wrong). The load average at this moment is 10.13, 9.47, 8.24. Some details:
- The disk utilization (on all disks) is near 0, according to iostat.
- There are no blocked processes, according to vmstat.
- I have two dual-core processors, so the maximum comfortable load average should be somewhere around 4.
- The server always has a load average above 8, over all time intervals.
By the way, my OS is RHEL AS release 4 (Nahant Update 7), kernel: Linux 2.6.9-78.ELhugemem #1 SMP i686 i686 i386 GNU/Linux
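Worth noting: on Linux the load average counts not only runnable tasks but also tasks in uninterruptible (D) sleep, so a load above 8 with idle CPUs and idle local disks usually means tasks are stuck waiting on something that vmstat's blocked column can miss between samples (NFS mounts and quietly failing devices are classic culprits on 2.6.9-era kernels). A crude sampler, assuming procps `ps`:

```shell
# Sample the counts of running (R) and uninterruptible (D) tasks a few
# times; a consistently non-zero D count with idle disks points at a
# remote filesystem or a device that is timing out.
i=0
while [ "$i" -lt 5 ]; do
    ps -eo state | awk '
        /^R/ { r++ }
        /^D/ { d++ }
        END  { printf "running=%d uninterruptible=%d\n", r, d }'
    sleep 1
    i=$((i + 1))
done
```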
I have a firewall / proxy that has an extremely high load, but I can't figure out what's using it. No real CPU usage, the disks are sleeping except for a little log activity, it's on gigabit Ethernet and not close to maxing out... Command line stuff runs fast, nothing seems slow, yet the load is sky-high. IME this kind of load is associated with a lot of disk I/O, but that's not the case here. What could be causing this, and what else factors into the load?
uname -a:
Code:
Linux myfirewall.mydomain.com 2.6.8 #1 SMP Mon Oct 18 11:20:22 CDT 2004 i686 i686 i386 GNU/Linux
top:
It all started about a week after upgrading to Jessie, when I had an unusual system failure: the CPU went to 100% usage and the hard drive light was on constantly. The keyboard and mouse were non-responsive. Not having REISUB enabled, I did the "stupid" thing and pushed the reset button on the computer. BAD BOY! As a result the computer would not boot, and I had to use a live CD to format the drive and install Wheezy (I had the CD).
After installing Wheezy, everything worked well for about 3 days and then it did the same thing. Fortunately I had REISUB enabled and was able to reboot. I looked at the syslog and found a segfault with colord-sane and, after some research that suggested colord-sane might be a problem, I set UseSane=1 in colord.conf. Things seemed to be okay for about 4 days.
Well, after all that I had another problem today with booting. During boot I got an error message saying that there was some hard drive problem and that I needed to log in and run fsck, which I did. There were, I believe, 4 inode errors that I was asked if I wanted to repair, to which I responded yes. After that the system booted correctly. After booting and entering the GNOME Classic desktop, I looked at the Disk Utility and checked the SMART data. There is now 1 bad sector where before there were none. The drive is a one-year-old WD 500GB VelociRaptor.
Don't know if this is relevant, but in the days before this latest "crash" I had downloaded about 8 movies using bittorrent. Could this have overtaxed the HDD?
I guess my questions are: When fsck "repaired" the disk, would it have moved any data from the bad sector to a new location? What may have caused the sector to go bad? Should I be buying a new hard drive?
The system seems to boot okay at this time, so I assume that no critical system files were affected. Just curious as to how I should proceed. First is BACK UP my data. Got that!
One more thing I just thought of is that every time it "crashed", I was using LXDE.
I have a server running the Samba process, and there are about 70 Samba users connected at a time. The system has 4 GB of memory, and it seems each Samba process is utilizing only 3352 KB of memory when I run the command pmap -d (pid of samba).
But when I run the top command, it shows the following:
Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9% us, 4.9% sy, 0.0% ni, 93.3% id, 0.8% wa, 0.2% hi, 0.0% si
Mem: 3895444k total, 3163192k used, 732252k free, 352344k buffers
Swap: 2097144k total, 208k used, 2096936k free, 2487636k cached
Why could the system be utilizing such high memory? By the way, the server is not running any other processes. The Samba version running on it is 3.0.33-0.17.
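The top output above is less alarming than it looks: the 3163192k "used" figure includes buffers and page cache, which the kernel reclaims on demand. Adding the free, buffer, and cached figures from that output (732252 + 352344 + 2487636 kB) gives about 3.4 GB that is effectively available, so the Samba processes really are only using a few hundred MB between them. A sketch that computes this from /proc/meminfo (the MemFree + Buffers + Cached approximation is the usual one on kernels of this vintage, which lack a MemAvailable field):

```shell
# Estimate memory that is genuinely in use, treating buffers and page
# cache as reclaimable rather than "used".
awk '/^MemTotal:|^MemFree:|^Buffers:|^Cached:/ { m[$1] = $2 }
     END {
         avail = m["MemFree:"] + m["Buffers:"] + m["Cached:"]
         printf "total=%d kB  effectively-free=%d kB  truly-used=%d kB\n",
                m["MemTotal:"], avail, m["MemTotal:"] - avail
     }' /proc/meminfo
```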
I recently built my second general-purpose server and installed Fedora Core 10 on it. The first thing I attempted to set up after installation was the network - and that's where it's gone wrong. When editing a network device using the graphical system-config-network utility, I find that the subnet mask is automatically changed to match the default gateway address every time I attempt to modify any of its settings (or sometimes even when I cancel the changes). This also means that I cannot set the subnet mask, as it simply won't accept my setting for it. I seem to be able to get around this glitch by setting the subnet mask using the shell version of the same utility, but that doesn't solve my network issue.
Even when I use the shell utility to fix the subnet mask, I'm unable to ping other computers or routers on the network even when ifconfig indicates that the desired ip address has been taken, and other computers on the network are also unable to see the server. I'm using a wired connection and a static IP address on a network with no DHCP.
I've been having a problem in Ubuntu 9.10 recently where, starting about 2 minutes after startup, my computer slows down and becomes unresponsive. I believe the problem is associated with high IOWait, because I have the system monitor applet on my GNOME panel and it displays 100% IOWait every time my system starts to slow down. I have tried booting into other kernel versions and the problem persists. I don't really know what IOWait is or how to diagnose this problem further. I've looked around online and it seems like you have to find the specific process that is causing the IOWait, but I don't understand how to go about doing that.
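To pin the IOWait on a process, one low-level option (a sketch; it needs root to read other users' entries, and the per-process `io` files only exist when the kernel was built with task I/O accounting, which Ubuntu's is) is to rank processes by cumulative bytes written from /proc. The `iotop` package shows the same thing interactively if you'd rather install that.

```shell
# Rank processes by total bytes written since they started.  The
# counters are cumulative, so take two snapshots a few seconds apart
# and compare to find who is writing *right now*.
for p in /proc/[0-9]*; do
    [ -r "$p/io" ] || continue
    wb=$(awk '/^write_bytes:/ { print $2 }' "$p/io")
    cmd=$(tr '\0' ' ' < "$p/cmdline")
    [ -n "$wb" ] && printf '%12s  pid=%s  %s\n' "$wb" "${p#/proc/}" "$cmd"
done | sort -rn | head
```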
I have a Debian system with the version you can see below. My problem is that a single core out of 4 is running at 100% all the time and I can't seem to find out why. The load is also high (load average: 0.91, 0.75, 0.40) because of this. This keeps happening even after reboots. The system has been freshly installed twice, and the same problem occurs as soon as it boots. Something called kworker is running and causing the load / CPU usage - is that normal for a fresh install?
root@Cyberdyne:/# lsb_release -da
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 8.2 (jessie)
Release:        8.2
Codename:       jessie
On an old server of mine, as soon as Apache is started, the load average that I see with 'top', which is normally under 1, just steadily climbs up and up to easily 150, in effect preventing the webserver from serving any webpage. I've checked netstat, and I'll try to upload the output. The IPs that were in there I've blocked with iptables, but that doesn't help, or so it seems. I see nothing weird in the error logs. As soon as I stop Apache, the load goes back to normal. As soon as I (re)start it, up it goes again. What can cause this and how do I get rid of it?
P.S. It's an old server, Fedora 3 or so, and I've got a new one to which I'll transfer the domains, but until that's completely done I'd like this one to run as it has for years...
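When the load only climbs while Apache runs, the first thing worth checking is what state the workers sit in (a sketch; the process name is `httpd` on Fedora, `apache2` on Debian-family systems). A pile of D-state workers means they are blocked on I/O (logs or a docroot on a dying disk or an NFS mount), while hundreds of healthy workers suggests a plain request flood that blocking a few IPs won't stop.

```shell
# Count Apache workers by process state.  S = sleeping (normal),
# R = running, D = uninterruptible sleep (stuck on I/O).
ps -C httpd -o state= | sort | uniq -c | sort -rn
```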
The load average is almost 1.06, but the CPU is not 100% utilized... I am just wondering why the load average is still 1.06 in that case (I monitored the CPU for a long time; it never exceeded 40%). Can anyone explain the reason behind it? Also, is the system over-utilized in this case?
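The likely answer: on Linux the load average counts tasks that are runnable or in uninterruptible (D-state) sleep, not CPU utilization. A load of 1.06 with the CPU at 40% usually means some task spends much of its time blocked on I/O, so the box is not CPU over-utilized, though it may be I/O-bound. The raw numbers live in /proc/loadavg:

```shell
# /proc/loadavg holds the 1/5/15-minute averages, then a
# runnable/total task count, then the most recently created PID.
read one five fifteen runq _ < /proc/loadavg
echo "load: 1min=$one 5min=$five 15min=$fifteen runnable/total=$runq"
```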
I am running Red Hat Enterprise Linux 5 Server and facing a problem with very, very high server load (the load average goes up to 60-70), due to which the server hangs.
I have just started to have a problem with Xorg: it is always using at least 30% of my CPU, and the whole system does not run smoothly. If I play a video it judders, and even if I drag an icon it judders across the screen. I'm running Ubuntu 10.10, 2.6.35-25-generic x86_64. VGA compatible controller: nVidia Corporation G98M [GeForce G105M] (rev a2)
I need to test some code that requires high disk I/O and disk-busy time in a Linux environment. Is there any way I can generate that load? It's urgent.
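A minimal way to keep a disk busy is dd with a cache-defeating flag (a sketch; `conv=fsync` forces the data to the platter at the end, and `oflag=direct` or `oflag=dsync`, where the filesystem supports them, keep the page cache from absorbing the writes). For controlled read/write mixes, queue depths, and run times, the `fio` tool is the usual choice.

```shell
# Write 256 MB and force it to the disk with a final fsync.
dd if=/dev/zero of=./iotest bs=1M count=256 conv=fsync
# Stronger variant if your filesystem supports O_DIRECT:
#   dd if=/dev/zero of=./iotest bs=1M count=256 oflag=direct
rm -f ./iotest
```

Watch the effect with `iostat -x 1` (from the sysstat package) while it runs; %util near 100 means the disk is saturated.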
Since upgrading from 9.10 to 10.04, my system is noticeably less responsive and exhibits halting behavior for, at times, tens of seconds. There is nothing obvious in the various logs that I can find. The only objective indication that something is seriously wrong is that my load average never drops below 0.5 and is often over 1.5, even at idle. In this situation, top/ps/whatever shows very few processes running (usually just top). This suggests to me that either the new kernel scheduler is horrible or that something new is resulting in blocking I/O or other uninterruptible sleeps. Typical top output:
I have an Apache cluster with more than 10 nodes, based on ldirectord and heartbeat. The problem is that I cannot predict whether my nodes will handle the traffic on any given day (we host a website based on daily campaigns). So I decided to limit the number of active connections on the nodes (in Apache), but this is only a temporary solution. I want to create a page that will be shown to users who get over the limit. Has anyone done this before? Can you tell me how it is possible (I don't want a howto, just a starting point to study)? I think Squid can do it, but I don't know how to search for it. For an example of what I want, you can see the same thing on deviantART.
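One starting point to study is Apache's mod_qos module, which can cap concurrent connections and serve a custom page to clients over the limit. A hypothetical fragment (the directive names and the /busy.html path are assumptions; verify them against the mod_qos documentation for your version):

```apache
# Sketch only -- check directives against your mod_qos version.
QS_SrvMaxConn       400          # hard cap on concurrent connections
QS_SrvMaxConnClose  380          # start closing keep-alives early
QS_ErrorPage        /busy.html   # page shown to over-limit clients
```

A reverse proxy in front of the nodes (Squid, or nginx with its limit_conn module) can do the same job at a single choke point instead of per node.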
I recently installed CentOS 5.3 x86_64 on a PowerEdge 2950 machine to serve as an FTP server using proftpd-1.3.2. I noticed that the load is much higher than in the days when I was using RHEL 4.7 x86_64. In the past, the load got higher than 100 only when there were about 400 online users, but on CentOS 5.3 the load gets higher than 150 when there are only 200 users.
The details of the server machine: Intel Xeon E5430, 16 GB RAM, 1.8 TB as a RAID disk, and a 400 GB hard disk used to install the system.
I am running a CentOS 5.6 32-bit installation under VMware ESXi and have been experiencing some very high load values from time to time. The server is running multiple gameserver installations, and the load fluctuates from around 1 to 9 with more or less the same number of players playing (give or take 5-10 of a possible 80). I've been running dstat with almost all possible metrics, and, except for the large fluctuations in load, nothing else out of the ordinary seems to happen when load is rising. (Log data can be provided if anyone wants to see it.) Disk I/O, network throughput, memory consumption, CPU usage, and process count all stay at the same levels when the load is 1 as when it is 9. How can I begin to troubleshoot this and find out why the load goes to such high values?
I just posted about this in this thread, but as the other thread was started by a KDE user then I thought I'd post here as well. I've had high CPU usage for a few months now - probably since trying the 0.9 branch of Compiz then dropping back to the default openSUSE builds (XOrg and gconfd-2 running a Core i5 at about 30% on every core*). I've now finally found a solution after deciding I wanted to fix it once and for all.
Once again, the Ubuntu forums come to the rescue with this thread (I don't like the distro as a whole, but I do find the forums useful!). I'm using Compiz, but it turns out that Metacity was running as well. A quick "killall -9 metacity" and the gconfd-2 process has vanished and XOrg settled down to its normal 1-2% (which is reasonable when I've got a Conky config refreshing every fraction of a second to repaint a sound visualiser!). Now I just need to find out why Metacity starts when I'm using Compiz...
* according to Conky's per-core graphs, although top only reported 15% overall and the Conky "top 3 procs by CPU" reported a measly 3% for each process, so someone's maths was out somewhere!
I upgraded my webserver to the new Ubuntu Server 10.04 (x86-64). After the upgrade, the load increased from 0.3 to 1.4. The webserver runs phpBB, which is now generating slow queries, which it did not before the upgrade to Lucid. HW conf: Intel Core i7, 8 GB RAM, WD Raptor 10k rpm. Upgraded to the new version in week 17.
It's the fourth time now since Maverick that I've had to cold reboot my system because it was totally unresponsive. The system monitor in my taskbar shows 100% on the background blue graph (I guess that's just one core then) and almost nothing on the other core. On the last freeze I managed to open 'sudo top' before it went totally unresponsive and saw that there was no high CPU usage at all, but a load average spiking above 24. Also, the swap seemed to be full, although my machine usually never uses swap. I was watching a movie with VLC this time, but I'm not sure if VLC was running the other times my OS froze. I made a snapshot with my cell phone: [URL]. How can I prevent a process from causing so much load?
I am having a problem with the server that I use to host my personal site. The load average quite often spikes above 1.00 for the 1- and 5-minute intervals, and the 15-minute interval gets above 0.5. This occurs while the server is idle, serving very few or no requests, with the CPU 99% idle and <1% IOWait. I have checked top and vmstat, but neither one provides any useful info. Top continues to say the CPU is 99% idle, and vmstat says that there are 0 runnable and 0 blocked tasks. Occasionally, vmstat will say that there is 1 runnable task, but this doesn't even coincide with the load average spikes. I have already searched for other solutions to this problem, but everything I have seen says to use top and/or vmstat, and those aren't showing anything out of the ordinary. Can anyone recommend anything else I might do?
My server has a Pentium 4 HT 3gHz processor, 2GB RAM, and runs Kubuntu 10.10. (The reason it runs Kubuntu instead of Ubuntu Server is that it needs an X environment so that the Nvidia driver can initialize and put its graphics card into a power saving mode.)
I have a Nagios server with a lot of hosts and services: around 400 services (in all) and 150 hosts. Most of these service checks are written in Bash. The problem is the server has a high load average, between 5 and 11. The server has the following specs:
- Intel Xeon 2.66GHz Dual Core - 4MB cache memory - 1GB RAM memory - 50GB hard disk
Is this load average normal? Should I rewrite the plugins in C?
I have several CentOS 32-bit VMs running on ESX 4. Those that were updated to the most current patch level ("yum update", accepting all updates available last week) started showing load average of ~0.4 when completely idle. After comparing the problematic VMs with those that show zero load average at idle, and then modifying them in all kinds of ways, I narrowed it down to the combination of a recent kernel patch (2.6.18-164 is fine, 2.6.18-194 is not) and E1000 network adapter. Replacing the network adapter on a problematic VM with VMXNET3 fixes the load statistics.