I've been using 10.04 since May and it has been good to me thus far. But recently I've been experiencing unusually high system loads, seemingly at random. For instance, it happened a few minutes ago when I turned on and mounted an external FireWire drive; I also had OGMRip open (but not encoding), as well as Transmission and Pidgin. Yesterday I was encoding a rather big movie with OGMRip, and at around 25% the system load quickly climbed to 10 and held there until I force-quit OGMRip, after which it went back to normal.
Another night, I was installing the popular Ubuntu themes via the terminal while OGMRip was also open; the system load shot up during the download/install and stayed there the rest of the night. So unless it's somehow related to OGMRip, I don't know what's going on. It has even happened when OGMRip was not processing any video, just sitting open.
After changing servers and moving things around, I have one system that serves the /home directories, and for some reason it is extremely slow. I've looked for duplicate IPs, signs of a hardware problem, etc., and cannot figure out why the system is running with a high load, high iowait, and low responsiveness. The server and clients run Slackware 12.2 and 13.1 respectively.
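A starting point for narrowing this down, assuming /home is exported over NFS (the post doesn't say) and that sysstat is installed:

Code:
# Per-disk utilization and service times; %util pegged at 100 or
# huge await values point at one disk or the controller
iostat -x 5
# Kernel messages often show a failing disk or a NIC renegotiating
dmesg | grep -iE 'error|fail|reset'
# If the export is NFS, check server-side counters for retransmits
nfsstat -s
# Rule out a duplex mismatch on the interface serving the clients
ethtool eth0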
I have a Debian system with the version you can see below. My problem is that one single core of the four is running at 100% all the time, and I can't seem to find out why. The load is also high (load average: 0.91, 0.75, 0.40) because of this. It keeps happening even after reboots. The system has been freshly installed twice, and the same problem occurs as soon as it boots. Something called kworker is running and causing the CPU load. Is that normal for a fresh install?
Code:
root@Cyberdyne:/# lsb_release -da
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 8.2 (jessie)
Release:        8.2
Codename:       jessie
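For anyone hitting the same thing: a pegged kworker thread is often servicing an interrupt storm rather than doing real work. A hedged sketch of how one might track it down (gpe13 below is just a placeholder; perf lives in Debian's linux-perf package):

Code:
# Profile where the kernel threads actually spend their time
perf record -g -a sleep 10
perf report
# A frequent culprit is an ACPI GPE firing constantly; look for one
# counter far above the rest
grep . /sys/firmware/acpi/interrupts/gpe* | sort -t: -k2 -rn | head
# If, say, gpe13 is storming, mask it as a test (resets on reboot)
echo disable > /sys/firmware/acpi/interrupts/gpe13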
Since upgrading from 9.10 to 10.04, my system is noticeably less responsive and at times halts for tens of seconds. There is nothing obvious in the various logs that I can find. The only objective indication that something is seriously wrong is that my load average never drops below 0.5 and is often over 1.5, even at idle. In this situation, top/ps/whatever shows very few processes running (usually just top). This suggests to me that either the new kernel scheduler is horrible or that something new is resulting in blocking I/O or other uninterruptible sleeps. Typical top output:
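One way to test the blocking-I/O theory: the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so at idle any D-state process is a prime suspect. A quick check:

Code:
# List tasks in uninterruptible sleep, plus the kernel function
# they are blocked in (wchan)
ps -eo pid,state,wchan:32,comm | awk '$2 == "D"'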
I upgraded my webserver to the new Ubuntu Server 10.04 (x86-64). After the upgrade, the load increased from 0.3 to 1.4. The webserver runs phpBB, which is now generating slow queries that it did not generate before the upgrade to Lucid. HW config: Intel Core i7, 8 GB RAM, WD Raptor 10k rpm. Upgraded to the new version in week 17.
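Assuming phpBB sits on the stock MySQL here, enabling the slow query log would at least show which queries regressed. A sketch for MySQL 5.1 as shipped with Lucid (paths are the Debian/Ubuntu defaults):

Code:
# /etc/mysql/my.cnf, [mysqld] section
slow_query_log      = 1
long_query_time     = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log

After restarting MySQL, mysqldumpslow /var/log/mysql/mysql-slow.log summarizes the offenders.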
It's the fourth time now since Maverick that I've had to cold reboot my system because it was totally unresponsive. The system monitor in my taskbar shows 100% on the background blue graph (I guess that's just one core) and almost nothing on the other core. On the last freeze I managed to open 'sudo top' before it went totally unresponsive, and I saw that there was no high CPU usage at all, but a load average spiking above 24. The swap also seemed to be full, although my machine usually never uses swap. I was watching a movie with VLC this time, but I'm not sure whether VLC was running the other times my OS froze. I took a snapshot with my cell phone: [URL]. How can I prevent a process from causing so much load?
I am having a problem with the server that I use to host my personal site. The load average quite often spikes to exceed 1.00 for the 1- and 5-minute intervals, and the 15-minute interval gets above 0.5. This occurs while the server is idle, serving very few or no requests, with the CPU 99% idle and <1% IOWAIT. I have checked top and vmstat, but neither provides any useful info. Top continues to say the CPU is 99% idle, and vmstat says there are 0 runnable and 0 blocking tasks. Occasionally vmstat will say there is 1 runnable task, but this doesn't even coincide with the load average spikes. I have already searched for other solutions to this problem, but everything I have seen says to use top and/or vmstat, and those aren't showing anything out of the ordinary. Can anyone recommend anything else I might try?
My server has a Pentium 4 HT 3 GHz processor and 2 GB RAM, and runs Kubuntu 10.10. (The reason it runs Kubuntu instead of Ubuntu Server is that it needs an X environment so that the Nvidia driver can initialize and put its graphics card into a power saving mode.)
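When top and vmstat look clean, the spikes are often caused by short-lived processes (cron jobs, monitoring probes) that run and exit between refreshes. A crude sampler, just a sketch, that polls several times a second and can be correlated with the spikes afterwards:

Code:
# Log every runnable (R) or uninterruptible (D) task several times a
# second; compare timestamps against the load spikes later
while :; do
    date +%T
    ps -eo state,pid,comm | grep '^[RD]'
    sleep 0.2
done >> /tmp/load-samples.log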
I wrote a script to extract and get the names of the *.gz files in a folder. Since I started running that script every 10 minutes, the load average on my server has increased to more than 10. I checked with 'top' and it showed many processes in the D state.
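Many D-state processes suggest the script saturates the disk and each 10-minute run piles onto the previous one. Two hedged mitigations: prevent runs from overlapping, and demote the job's I/O priority. The script path is a placeholder, and ionice class 3 needs the CFQ scheduler:

Code:
# crontab entry: flock skips a run if the last one is still going,
# ionice/nice keep it from starving everything else
*/10 * * * * flock -n /tmp/gz-extract.lock ionice -c3 nice -n19 /path/to/extract-gz.sh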
On an old server of mine, as soon as Apache is started, the load average I see with 'top', normally under 1, steadily climbs to easily 150, effectively preventing the webserver from serving any pages. I've checked netstat, and I'll try to upload the output. The IPs that show up in there I've blocked with iptables, but that doesn't help, or so it seems. I see nothing weird in the error logs. As soon as I stop Apache, the load goes back to normal. As soon as I (re)start it, up it goes again. What can cause this, and how do I get rid of it?
P.S. It's an old server, Fedora 3 or so, and I've got a new one to which I'll transfer the domains, but until that's completely done, I'd like this one to run as it has for years...
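Before retiring the box, it may be worth seeing what the connections look like at the moment the load climbs; that distinguishes a flood of clients from a hung backend. A sketch:

Code:
# Connections to port 80, grouped by state and by source IP
netstat -ant | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c | sort -rn
netstat -ant | awk '$4 ~ /:80$/ {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
# With mod_status enabled, see what each Apache child is doing
apachectl fullstatus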
The load average is around 1.06, but the CPU is not 100% utilized. I am wondering why the load average is 1.06 in that case (I monitored the CPU for a long time and it never exceeded 40%). Can anyone explain the reason behind this? Also, is the system over-utilized in this case?
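For anyone else puzzled by this: on Linux the load average counts tasks in uninterruptible sleep (usually waiting on disk or network storage) in addition to tasks using the CPU, so I/O-bound work can hold the load above 1 while CPU stays low. A quick way to see which case applies:

Code:
# 'r' = runnable tasks, 'b' = tasks blocked in uninterruptible sleep;
# a steady nonzero 'b' with low CPU explains load without utilization
vmstat 5
ps -eo pid,state,comm | awk '$2 == "D"'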
I am running Red Hat Enterprise Linux 5 Server and facing a problem with very high server load (the load average goes up to 60-70), due to which the server hangs.
I just noticed that during calls Skype uses ~30-50% CPU, and pulseaudio uses ~20%. I found some old threads on this, like [URL] or [URL]. The former suggests purging pulse and using alsa/oss instead; the latter suggests changing the mic in Skype to DeviceXX and tweaking the pulse config. There's also a Launchpad bug on this with status Confirmed->Invalid (due to "problem fixed in skype 2.1 beta"), but it looks like it's not fixed (I am using the 2.1 beta).
1. Does anyone else have this problem?
2. I can't try the second link's approach, as my Skype only shows PulseAudio in the settings, so I can't select a device or anything else.
3. Should I try removing PulseAudio in favor of ALSA? I'm pretty happy with pulse apart from this issue; I use it to record sound out of my sound card, and I'm not sure I can do that with alsa/oss.
During downloads, the top command shows the Firefox process at 100% CPU. Yesterday I tried to download an .iso image. After a few hours the Firefox window would not refresh nor would it respond to input. I tried the wget command. It used negligible CPU time and completed in 28 minutes.
This problem is easy to reproduce because it happens every time I download a file in Firefox. It also happens when I use a fresh profile to run Firefox without any extensions or plugins.
I have an Apache cluster with more than 10 nodes, based on ldirectord and Heartbeat. The problem is that I cannot predict whether my nodes will handle the traffic on any given day (the site runs daily campaigns). So I decided to limit the number of active connections on the nodes (in Apache), but this is only a temporary solution. I want to create a page that will be shown to users who arrive over the limit. Has anyone done this before? Can you tell me how it is possible (I don't want a howto, just a starting point to study)? I think Squid can do it, but I don't know what to search for. For an example of what I want, you can see the same thing on deviantART.
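One concrete starting point, named plainly since the post only guesses at Squid: Apache's mod_qos can cap concurrent connections and send the overflow to a static page. A minimal sketch; the limit and the /busy.html page are placeholders:

Code:
# httpd.conf
LoadModule qos_module modules/mod_qos.so
# maximum concurrent TCP connections this server will accept
QS_SrvMaxConn 256
# page returned to clients arriving over the limit
QS_ErrorPage  /busy.html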
I have high load on my server and my investigation shows nothing (so I believe my investigation is wrong). The load average at this moment is 10.13, 9.47, 8.24. Details below:
- Disk utilization (all disks) is near 0, according to iostat.
- There are no blocked processes, according to vmstat.
- I have two dual-core processors, so the maximum sensible load average should be around 4.
- The server's load average stays above 8 across all three intervals.
BTW, my OS is RHEL AS release 4 (Nahant Update 7), kernel: Linux 2.6.9-78.ELhugemem #1 SMP i686 i686 i386 GNU/Linux.
I recently installed CentOS 5.3 x86_64 on a PowerEdge 2950 machine to serve as an FTP server using proftpd-1.3.2. I noticed that the load is much higher than in the days when I was running RHEL 4.7 x86_64. In the past, the load got higher than 100 only when there were about 400 online users, but on CentOS 5.3 it gets higher than 150 with only 200 users.
The details of the server machine: Intel Xeon E5430, 16 GB RAM, 1.8 TB RAID array, and a 400 GB hard disk used to install the system.
I am running a CentOS 5.6 32-bit installation under VMware ESXi and have been experiencing some very high load values from time to time. The server runs multiple game server installations, and the load fluctuates from around 1 to 9 with more or less the same number of players connected (give or take 5-10 of a possible 80). I've been running DStat with almost all possible metrics and, except for the large fluctuations in load, nothing out of the ordinary seems to happen when the load is rising. (Log data can be provided if anyone wants to see it.) Disk I/O, network throughput, memory consumption, CPU usage, and process count all stay at the same levels when the load is 1 as when it is 9. How can I begin to troubleshoot this and find out why the load goes so high?
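One metric the usual guest-side tools don't highlight on a VM is steal time, the time the hypervisor ran other guests while this one wanted the CPU. If load rises while everything inside the guest looks flat, host-side contention is a plausible suspect. A quick check from inside the VM:

Code:
# 'st' column = steal; consistently high values mean the host,
# not the guest, is the bottleneck
vmstat 5
top -bn1 | grep 'Cpu'

On the ESXi host itself, esxtop's %RDY value per VM tells the same story from the other side.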
I just posted about this in this thread, but as the other thread was started by a KDE user, I thought I'd post here as well. I've had high CPU usage for a few months now, probably since trying the 0.9 branch of Compiz and then dropping back to the default openSUSE builds (XOrg and gconfd-2 running a Core i5 at about 30% on every core*). I've now finally found a solution, after deciding I wanted to fix it once and for all.
Once again, the Ubuntu forums come to the rescue with this thread (I don't like the distro as a whole, but I do find the forums useful!). I'm using Compiz, but it turns out that Metacity was running as well. A quick "killall -9 metacity" and the gconfd-2 process has vanished and XOrg settled down to its normal 1-2% (which is reasonable when I've got a Conky config refreshing every fraction of a second to repaint a sound visualiser!). Now I just need to find out why Metacity starts when I'm using Compiz...
* according to Conky's per-core graphs, although top only reported 15% overall and the Conky "top 3 procs by CPU" reported a measly 3% for each process, so someone's maths was out somewhere!
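On the lingering question of why Metacity starts alongside Compiz: in GNOME 2 the session reads its window manager from a gconf key, so a hedged guess at a permanent fix is to point that key at compiz:

Code:
gconftool-2 --type string \
  --set /desktop/gnome/session/required_components/windowmanager compiz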
I have a Nagios server with a lot of hosts and services: around 400 services in total and 150 hosts. Most of the service checks are bash scripts. The problem is that the server has a high load average, between 5 and 11. The server has the following specs:
- Intel Xeon 2.66 GHz dual core - 4 MB cache - 1 GB RAM - 50 GB hard disk
Is this load average normal? Should I rewrite the plugins in C?
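Before rewriting anything in C, it may be worth measuring what one plugin run actually costs. With 400 services the fork/exec churn of bash adds up, but the checks themselves (DNS waits, remote timeouts) are often the real cost. A sketch, with check_example standing in for one of the bash plugins:

Code:
# Cost of one real check
time /usr/local/nagios/libexec/check_example
# Rough baseline for bash startup alone, 100 iterations
time for i in $(seq 100); do bash -c ':'; done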
I have several CentOS 32-bit VMs running on ESX 4. Those that were updated to the most current patch level ("yum update", accepting all updates available last week) started showing load average of ~0.4 when completely idle. After comparing the problematic VMs with those that show zero load average at idle, and then modifying them in all kinds of ways, I narrowed it down to the combination of a recent kernel patch (2.6.18-164 is fine, 2.6.18-194 is not) and E1000 network adapter. Replacing the network adapter on a problematic VM with VMXNET3 fixes the load statistics.
I have a firewall / proxy that has an extremely high load, but I can't figure out what's causing it. No real CPU usage, the disks are sleeping except for a little log activity, it's on gigabit ethernet and not close to maxing out... Command line stuff runs fast, nothing seems slow, yet the load is sky-high. IME this kind of load is associated with a lot of disk I/O, but that's not the case here. What could be causing this, and what else factors into the load?
uname -a:
Code:
Linux myfirewall.mydomain.com 2.6.8 #1 SMP Mon Oct 18 11:20:22 CDT 2004 i686 i686 i386 GNU/Linux

top:
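Separately from the top output, a hedged guess worth checking on a netfilter box of this vintage: a nearly full conntrack table makes the kernel work hard while the usual CPU and disk counters stay quiet. Quick checks:

Code:
# Current conntrack entry count vs. the ceiling
wc -l /proc/net/ip_conntrack
cat /proc/sys/net/ipv4/ip_conntrack_max
# The kernel logs when it starts dropping new connections
dmesg | grep -i 'table full'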
I am trying to set up a high-availability HTTP load balancer with HAProxy & Heartbeat using the links below.
All servers are RHEL 5.4, hosted on VMware.
[url] [url]
This is the scenario, as given in the links, as well as my setup:
Load Balancer 1
Load Balancer 2
Web Server 1
Web Server 2
I have followed all the steps in the links religiously except step 2.2, which asks to configure the vhosts. I could not really understand what is to be placed in the /etc/httpd/conf.d/vhosts.conf file, and on which web server.
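For what it's worth, here is a minimal guess at what the guide intends for /etc/httpd/conf.d/vhosts.conf, placed on each web server with its own ServerName (http1.example.com on Web Server 1, http2.example.com on Web Server 2); the DocumentRoot is an assumption:

Code:
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName   http1.example.com
    DocumentRoot /var/www/html
</VirtualHost>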
Because of this step, I think, I am failing the failover test given in point 4.1. I am able to open the webpage at [url], which shows the content of Web Server 1 (http1.example.com). But when I shut down the httpd service (to check failover), it does not show the contents of Web Server 2 (http2.example.com).
I do succeed in failover test 4.2, though, in which the shared IP 192.168.0.120 switches over when I start/stop either of the load balancers.
How do I set up an active/active, load-balanced and highly available (if one of the nodes is down, the system still runs) MySQL cluster? I have found quite a few howtos, but some things are still unclear in my mind. I found a few solutions like this one: [URL] or this: [URL]. Those use two or four MySQL nodes and two load balancers to avoid a single point of failure, but only one MySQL cluster management server. What happens if the MySQL cluster management server fails?
I have also found a "MySQL Master-Master Circular Replication" technique, but from what I read, with this option there is a chance of conflicts if node A and node B both insert an auto-incrementing key on the same table.
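The auto-increment collision in master-master replication has a standard mitigation: give each master a distinct offset so they hand out interleaved key values. A sketch of the relevant my.cnf settings:

Code:
# my.cnf on node A (node B uses auto_increment_offset = 2)
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1

This prevents duplicate-key collisions on inserts, though it does not resolve conflicting updates to the same row.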
I've been having a problem in Ubuntu 9.10 recently where, starting about 2 minutes after startup, my computer slows down and becomes unresponsive. I believe the problem is associated with high IOWait, because the system monitor applet on my GNOME panel displays 100% IOWait every time my system starts to slow down.
I have tried booting into other kernel versions and the problem persists. I don't really know what IOWait is or how to diagnose this problem further. From what I've read online, it seems you have to find the specific process that is causing the IOWait, but I don't understand how to go about doing that.
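A concrete way to put a name to the IOWait, as a sketch: iotop (in the Ubuntu repositories) shows per-process I/O, and processes stuck waiting on disk sit in state D:

Code:
# Only show processes actually doing I/O right now
sudo iotop -o
# Tasks blocked on I/O appear in uninterruptible sleep (state D)
ps -eo pid,state,comm | awk '$2 == "D"'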
We use a Linux box for routing and traffic shaping (a few thousand IP addresses are routed through this box) and the soft interrupt load is very high. We run Linux kernel 2.6.32.7 on an Intel Core 2 Quad Q6600 @ 2.40GHz. Cpu1 serves an internal network interface, and Cpu3 serves a single physical network interface (with 2 VLANs):
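Beyond the per-CPU numbers, two things would be worth checking here, sketched below with a placeholder IRQ number: which softirq class dominates, and whether the NIC interrupts can be re-pinned with smp_affinity (or balanced by irqbalance).

Code:
# Which softirq type is hot, and on which CPU (NET_RX vs NET_TX etc.)
watch -n1 cat /proc/softirqs
# Which IRQs the NICs use and where they currently land
grep -i eth /proc/interrupts
# Pin e.g. IRQ 24 to CPU3 (bitmask 8); the IRQ number is a placeholder
echo 8 > /proc/irq/24/smp_affinity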
So, as the title says, shutdown takes unusually long: upwards of 4-6 minutes. I'm used to my Linux systems taking about 10 seconds to shut down, 20 tops. In fact, this problem seems to stem from the Natty release, because I didn't have this issue with the betas or 10.10...
It seems the longer I use my laptop, the longer it takes to shut down. After a quick task like a file backup, shutdown takes 10 seconds or so, but if I open a browser and surf for a few hours, it takes more like 5 minutes.
I have been using Ubuntu for 4 years now on my decent laptop with 2 GB RAM, a dual-core Centrino, etc. Yet in all the years I have been using this superior OS, I still have to do hard shutdowns because some program runs wild. Lately there are 2 scenarios where I have to intervene in a process:
1: Amarok crashes and leaves the Python script for the GNOME shortcut keys running at 100% CPU. Or: thunderbird-bin keeps running after an apparently clean close of Thunderbird. That doesn't really bother me; I just kill both processes.
The bigger problem is scenario 2: VLC starts eating all my RAM (for no reason), my swap starts filling, and my computer becomes unusable for 10 minutes. Or: my MATLAB script is too big and eats up too much RAM, with the same result. Note: I have nothing against swap; at many other times it's very useful.
These are stupid, annoying problems with an easy solution:
1) automatically kill the stupid process that runs at 100% CPU
2) automatically kill the stupid process that eats up all my RAM
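A blunt instrument, but as a sketch (the thresholds, interval, and log path below are all assumptions of mine): a watchdog that kills anything over 95% CPU or roughly 1.5 GB resident. Whitelist anything important before trusting it with kill -9; a gentler variant would try kill -TERM first, and per-process memory can also be capped up front with ulimit -v in the shell that launches VLC or MATLAB.

Code:
#!/bin/bash
# hog-killer.sh: kill runaway processes by CPU or resident memory.
CPU_MAX=95        # percent
RSS_MAX=1500000   # KiB, ~1.5 GB

while sleep 10; do
    ps -eo pid,pcpu,rss,comm --no-headers |
    awk -v c=$CPU_MAX -v r=$RSS_MAX '$2 > c || $3 > r {print $1, $4}' |
    while read pid comm; do
        echo "$(date) killing $comm (pid $pid)" >> /var/log/hog-killer.log
        kill -9 "$pid"
    done
done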