General :: Finding Processes Which Are Hogging Machine
Aug 22, 2011
All of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? Now, "top" and similar tools aren't the answer, because they show either CPU or memory usage but not both at the same time. What I need is a single command that I could type as it happens - something that will figure out any of: "System is trying to swap 8GB of RAM to disk because process X ..." or "process X seeks all over the disk" or "process X uses 400% CPU"
So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:
1235 cp - Disk thrashing
87 chrome - Uses 2GB of RAM
137 nfs_bench - Uses 95% of the network bandwidth
I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process is causing the current load. Assume that the user in front of the keyboard barely knows how to spell "process" but is quickly overwhelmed when it comes to "resident size", "virtual memory" or "process life cycle".
My argument goes like this: User notices problem. There can be thousands of reasons ... well, almost. User wants to know source of problem. The current solutions give me lots of numbers and I need to know what these numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. So what the tool should do is look for processes which hog some resource and list only those along with "this process needs a lot of CPU, this produces many IRQs, this process allocates a lot of RAM (and it's still growing)".
This will be a relatively short list. It will be much simpler for a newbie to locate the culprit from this list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16GB of RAM - the machine ought to swap itself to death, but of course this is a misinterpretation of the data that is easy to make).
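There may not be a polished meta tool that produces exactly that summary, but a rough approximation can be glued together from standard tools. A hedged sketch (iotop is a separate package, and the line counts are arbitrary):
ps -eo pid,comm,%cpu --sort=-%cpu | head -5      # biggest CPU consumers right now
ps -eo pid,comm,%mem,rss --sort=-rss | head -5   # biggest memory consumers
sudo iotop -b -n 1 -o | head -10                 # processes actually doing disk I/O at this moment
Of the existing tools, atop probably comes closest to the "everything in one place" idea, since it records CPU, memory and disk usage per process over time (and network too, with its optional kernel module).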
I want to check all the child processes attached to a specific process. Say, for example, my process is a Java process; I want to know which processes are attached to that Java process. At the moment I use the command # ps aux | grep java, which gives the details about the currently running Java process, so I can kill the whole thing using # kill -9 {pid} ... but there are sub-processes (I guess child processes) attached to the Java process. I want to view all of them and kill whichever process I like, not the whole thing. I'm using Red Hat Enterprise Linux 5, and a WebLogic application server is currently running on it.
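A hedged sketch of looking before killing; pstree and pgrep ship with RHEL 5, but the "weblogic" match and the idea that the children are separate processes are assumptions:
JAVA_PID=$(pgrep -f weblogic | head -1)    # or simply: pgrep java
pstree -p "$JAVA_PID"                      # whole tree under the JVM, with PIDs
ps --ppid "$JAVA_PID" -o pid,ppid,cmd      # direct children only
# then kill only the child you want: kill <pid-of-child>
Note that what looks like many Java sub-processes is often just threads of the same JVM (ps -eLf shows them); threads cannot be killed individually from outside the process.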
I thought 'killall' would work, but I need to provide the "command" to kill. I'm really looking for a command that will kill all processes that have a particular file/directory open. Currently, my script fails on an 'umount' because several processes have this filesystem open. The command 'lsof' is a good tool for determining which processes have a filesystem open, but I don't really want to write a script that parses the 'lsof' output to capture PIDs. Is there a Linux command that can kill all processes that have a particular filesystem open?
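fuser can do exactly this without any lsof parsing; a hedged example with a placeholder mount point:
fuser -vm /mnt/data     # list the processes holding the mount point open
fuser -km /mnt/data     # kill them (SIGKILL by default); the umount should then succeed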
What is the best Linux Mint backup tool that is most like Time Machine (that ships on Macs)?
The one thing that I want it to have similar to Time Machine is that it only backs up files that have been changed, therefore making for faster backups.
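Several Mint-friendly tools (Back In Time, for instance) work this way, and underneath most of them rely on rsync hard-link snapshots. A minimal sketch of the idea, with placeholder paths:
rsync -a --link-dest=/backups/latest /home/ /backups/$(date +%F)/   # changed files are copied, unchanged ones hard-linked
ln -sfn /backups/$(date +%F) /backups/latest                        # point "latest" at the new snapshot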
I have a situation where I have several screen (/usr/bin/screen) sessions running. Two of the screen sessions (ps1 and ps2) run a script that launches SIPp with specific parameters. One script starts SIPp and has it make 50 calls, while the other makes only 20 calls. However, the scripts are configured so that we can change how many calls they make if needed.
So the problem is, due to issues with SIPp, we must restart everything every 12 hours (at maximum). So I am trying to work out scripts to stop the SIPp processes cleanly. In order to do so, I need to figure out which SIPp process was spawned by which screen, i.e. which sipp was started by screen session ps1 and which one was started by screen session ps2.
Now I can do ps -ef | grep <number of calls configured> to find out, but then I would have to change my stop script every time we reconfigure how many calls are made, and have a separate stop script for each screen session. I would much rather pass the screen name as a parameter to the stop script and have it work no matter how many calls SIPp is configured to make. Also, your standard kill -1 <PID> does not shut down SIPp cleanly, so working out those details is a bit more tricky. Does anyone know how I can determine which processes are spawned from a specific screen session?
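A hedged sketch of walking from the session name down to its SIPp process; it assumes one shell per screen session and that sipp is a direct child of that shell (add another pgrep -P hop if the launch script stays in the chain):
SESSION=ps1
SCREEN_PID=$(screen -ls | grep "\.${SESSION}[[:space:]]" | awk -F. '{print $1}' | tr -d '[:space:]')
SHELL_PID=$(pgrep -P "$SCREEN_PID")       # the shell running inside that screen
SIPP_PID=$(pgrep -P "$SHELL_PID" sipp)    # the sipp started by that shell
kill "$SIPP_PID"                          # replace with whatever signal actually stops SIPp cleanly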
I have a desklet that, occasionally after toying with network stuff, will tell me that large amounts of data are being sent/received. What's a good way to determine what processes are occupying these resources?!
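nethogs is the usual answer for "which process is using the network"; a hedged example with a placeholder interface name:
sudo nethogs eth0    # live per-process send/receive rates on that interface
iftop shows the same picture per connection rather than per process, which can also help narrow things down.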
I did my homework and found some similar questions, but they seem to cover particular Firefox add-ons. My scenario is different: I don't run a ton of add-ons, but CPU usage still periodically skyrockets to 100% (I have an old single-core CPU). I wonder if it is possible to see which tab is the offending one. Generally I don't run a gazillion tabs, I try to stick to the 7+/-2 common-sense rule, but closing tabs one by one and watching the CPU usage is still not very convenient.
I just bought an SSL cert and installed it on my Apache server. When I restarted, something went wrong, so I had to change some config stuff, and when I tried to restart Apache for the second time I got this:
$ sudo apache2ctl start
(98)Address already in use: make_sock: could not bind to address [::]:80
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
Problem is that Apache isn't running. For some reason there is something hogging TCP port 80, preventing Apache from starting properly. How do I fix this? Is there a way to "free" a port?
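A hedged sketch of finding and evicting whatever owns port 80 (the PID and process name will differ on your box):
sudo netstat -tlnp | grep ':80 '    # shows the PID/program bound to port 80
sudo lsof -i :80                    # same information from lsof
sudo apache2ctl stop                # often it is a half-started apache; otherwise: sudo kill <pid from above>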
I want to know the MAC address of a particular IP, but the problem is that I am unable to ping that IP. The IP is being used by someone on my local network - I know that from my proxy logs. I want to find the MAC address for that IP.
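ARP works even when ICMP ping is blocked, so arping is a hedged option here (the interface and address are placeholders):
sudo arping -I eth0 192.168.1.50       # ask for the MAC directly at the ARP layer
ip neigh show | grep 192.168.1.50      # or read the existing ARP/neighbour cache (arp -n on older systems)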
I am looking for software that would allow me to do an automatic installation on a machine with some predetermined RPMs and partitioning. To start, I looked at the package mkcd, but it is quite complicated to use and I cannot quite work out what to do. Looking further I found Bcd, which creates an ISO from a predefined XML file; it looks easier to use, but I do not know if it can do exactly what I want: a bootable CD that installs Linux automatically with the added RPMs.
I found other software that could match my search, such as "Kickstart", but that is made for Red Hat and rather poor in tutorials; there is nothing familiar in the field. There is also "Fully Automatic Installation" [URL], which at first glance might work with Fedora and others.
I am looking for a Tomcat startup script on an Ubuntu machine. My Ubuntu is 10.04 Server. The Tomcat is 5.5.30; it is in /opt/apache-tomcat-5.5.31. I tried a script here
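In case it helps, here is a hedged sketch of a minimal /etc/init.d/tomcat for a manual /opt install; the path comes from the question above, but the rest (user, JAVA_HOME handling) is an assumption:
#!/bin/sh
# minimal Tomcat init script sketch; export JAVA_HOME here if startup.sh cannot find it
CATALINA_HOME=/opt/apache-tomcat-5.5.31
case "$1" in
    start)   "$CATALINA_HOME/bin/startup.sh" ;;
    stop)    "$CATALINA_HOME/bin/shutdown.sh" ;;
    restart) "$CATALINA_HOME/bin/shutdown.sh"; sleep 5; "$CATALINA_HOME/bin/startup.sh" ;;
    *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
After saving it as /etc/init.d/tomcat and marking it executable, update-rc.d tomcat defaults registers it to start at boot on Ubuntu.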
I installed Ubuntu 10.10 a few days ago, and it ran very well from the live CD, but after I upgraded to 11.04, Ubuntu uses just under 100% of my CPU within a few minutes of logging in, and my RAM usage increases by about 5MB per second starting the second I log in. I am using Classic GNOME, and it seems to do this whether I have Metacity or Compiz turned on. Does anyone know what is going on, or a way to lower either my CPU or RAM usage?
Recently I have noticed that the process kacpi_notify is constantly using 10-14% of my total CPU time; I am running an i7 @ 2.66GHz. This causes the CPU temperature to rise, and eventually the fan kicks in to cool it down, at which point kacpi_notify stops using CPU time. Then about a minute later it kicks in again and uses CPU time constantly until the fan kicks in because of it. It sounds like something is triggering the process and then it gets stuck in an endless loop. I just don't know where I can modify this so it doesn't start up all the time. Some tech stuff:
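A common trigger for kacpi_notify spinning is a runaway ACPI GPE interrupt; a hedged way to check and temporarily mask it (the GPE number below is only an example - pick whichever counter keeps climbing rapidly on your machine):
grep . /sys/firmware/acpi/interrupts/gpe*                      # look for a count that grows continuously
echo disable | sudo tee /sys/firmware/acpi/interrupts/gpe13    # mask that one GPE until the next reboot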
Does anyone else have an issue with Firefox memory hogging? If you open many tabs, it just seems to use up most of the memory, and despite closing tabs it will not release it. You have to restart the browser every so often.
I am running a fairly standard 32-bit 10.10 install, although I deleted Firefox and installed Chrome. I am using Transmission for my torrent downloads. After a lot of reading and trying different things, I have managed to get the port to open. When Transmission is running, even if it is only uploading and downloading a few KB each way, I have great difficulty getting Chrome to load a web page; it is as if Transmission is hogging all the available bandwidth. Obviously if I shut down Transmission the problem goes away. It also occurred with Firefox before I deleted it.
I am running Fedora 9 and KDE 4.2.1. I want to set up some traffic shaping on my machine to prevent my torrent client from hogging my entire bandwidth. I.e., I want KTorrent to download and upload to the best of its ability, but I still want to be able to browse the net freely in spite of the torrents. I have done some reading about traffic shaping in Linux. There is lots of material about it, but most of it (such as the lartc.org "howto") is very complex and comprehensive and looks extremely intimidating. Furthermore, most of it addresses situations where you want to distribute traffic between multiple computers in a network. I just want to manage processes on a single machine. I am hoping for a piece of software that lets me assign a "priority" to each application, or something like that. Like cFosSpeed for Windows.
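Two hedged per-machine options that avoid hand-written tc rules (all the numbers below are placeholders):
trickle -d 300 -u 30 ktorrent        # userspace limiter for a single application, rates in KB/s
sudo wondershaper eth0 8000 800      # shapes the whole interface: device, downlink kbit/s, uplink kbit/s
trickle only works for dynamically linked programs, and wondershaper mainly keeps a saturated uplink from ruining interactive traffic, so neither is a full cFosSpeed-style priority system.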
During random moments, Pidgin will suddenly use up all of my processor and a large chunk of my RAM (20% - 25% of 2 GB), and will become completely unresponsive. This lasts for maybe four or five minutes, then it returns to normal. Pidgin doesn't have any kind of terminal output when run in a terminal, so I have no data that is of any help.
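Pidgin is quiet by default, but it will log plenty if asked; a hedged suggestion for capturing something at the moment of the freeze:
pidgin --debug 2>&1 | tee ~/pidgin-debug.log    # watch what it prints when it locks up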
I just installed jessie on a machine that had been running wheezy with no problems. Now I see that a kworker process is hogging nearly 100% of one of the CPUs. I am not sure how to proceed with solving the problem even after doing a number of Google searches.
I'm not sure if this is related, but I am getting the following when I run 'dmesg':
My hardware is:
cpu: Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz, 2333 MHz
     Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz, 2000 MHz
keyboard: /dev/input/event0 AT Translated Set 2 keyboard
mouse: /dev/input/mice ImExPS/2 Logitech Explorer Mouse
[Code] ....
Here is the "top" display, showing 72.5% of the CPU on kworker/1:2 and 27.6% of the CPU on kworker/1:1:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 4731 root      20   0       0      0      0 R  72.5  0.0   0:53.73 kworker/1:2
   28 root      20   0       0      0      0 S  27.6  0.0   0:58.69 kworker/1:1
 1246 dan       20   0 1668476 132720  57548 S   2.7  4.3   0:42.33 gnome-shell
 4673 dan       20   0  855208 158368  65568 S   2.7  5.2   0:28.44 iceweasel
  815 root      20   0  201804  29020  18728 S   1.0  0.9   0:14.30 Xorg
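A hedged sketch of finding out what that kworker is actually doing (PID 4731 is taken from the output above; it will change after a reboot):
sudo cat /proc/4731/stack                      # the worker's kernel stack at this instant
sudo perf record -g -a sleep 10                # system-wide sample; inspect afterwards with: sudo perf report
grep . /sys/firmware/acpi/interrupts/gpe*      # a runaway ACPI GPE is a common cause of a spinning kworker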
I successfully installed VirtualBox on my Fedora 8 system and created a virtual machine with Windows XP as the OS; it works nicely. I tried to configure the serial port of my virtual machine and set the path for the port (screenshot attached), and it gives me an error message (screenshot also attached) for your review. What kind of mistake is happening during the path setting, and how do I set the path when configuring the serial port of my virtual machine, so that I can use the HyperTerminal tool in Windows?
I have some file tools on a Mint machine that I would rather not install on my Mac laptop, mainly because of the vastness of apt-get and the low risk of installation failure. Anyway, every so often I have a file that I want to process in place using some remote tool. Both machines can ssh right into each other, so I was figuring there must be some script or tool out there that would allow me to type something like remote [file] [tool & args] to send my file to the other machine, get it processed, then get it back.
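I am not aware of a standard tool with exactly that interface, but here is a hedged sketch of a tiny "remote" wrapper (the host name mintbox is an assumption, and it presumes the tool modifies the file in place):
#!/bin/sh
# usage: remote <file> <tool and args>
FILE=$1; shift
scp "$FILE" mintbox:/tmp/ &&
ssh mintbox "cd /tmp && $* '$FILE'" &&
scp "mintbox:/tmp/$FILE" "$FILE"
For filter-style tools you can skip the copies entirely: ssh mintbox sometool < file > file.new.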
I know very little about Linux, but I decided to set up a machine running the Drupal CMS on Debian, and it won't go. The folks at Drupal have tried to help, but it seems the Debian OS won't do its PHP thing for Drupal.
That means I'll have to start at the START, I guess.
How does one become a master of Linux when starting from the ABCs? (I can add and subtract - that's what it feels like.)
I'm administrating the computers in my office. I want to monitor the users' activity. How can I log in remotely without disturbing the user's activity on his computer? Does any software need to be installed? (I don't want to use the Terminal Server Client.)
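One hedged approach (and check your workplace policy first): x11vnc can attach read-only to the user's existing X session rather than opening a new session the way a terminal server client does:
x11vnc -display :0 -viewonly -forever    # run this on the user's machine, then connect with any VNC viewer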
I had this error when installing and running a vncserver before, which I have now removed. However, the xterms seem to remain in the system and are regenerating themselves. Should the PIDs stay the same each time I run this?
I need to create a small list of processes in a monitor.conf file. A shell script needs to check the status of these processes and restart them if they are down. This shell script needs to be run every couple of minutes.
The output of the shell script needs to be recorded in a log file.
So far I have created a blank monitor.conf file, I have gotten the shell script to run automatically every couple of minutes, and the shell script also sends some default test information to the log file.
How do I go about doing this part? A shell script needs to check the status of these processes and restart them if they are down.
I have put the commands below in the conf file, but I am not sure if this is right.
ps ax | grep httpd
ps ax | grep apache
I also don't know if the shell script should read from the conf file or if the conf file should send information to the shell script file.
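A hedged way to wire it together is to keep monitor.conf as a plain list of process names (one per line, e.g. httpd) and let the script do the checking; pgrep -x also avoids the classic problem of grep matching its own grep. Paths and the restart command below are assumptions:
#!/bin/sh
# monitor.sh - restart any listed process that is not running, and log what happened
CONF=/etc/monitor.conf
LOG=/var/log/monitor.log
while read -r proc; do
    [ -z "$proc" ] && continue
    if pgrep -x "$proc" > /dev/null; then
        echo "$(date): $proc is running" >> "$LOG"
    else
        echo "$(date): $proc is DOWN - restarting" >> "$LOG"
        /etc/init.d/"$proc" start >> "$LOG" 2>&1
    fi
done < "$CONF"
A crontab entry such as */2 * * * * /usr/local/bin/monitor.sh then runs it every two minutes.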
I know that you can modify the nice value of a particular process as follows:
renice 19 -p 4567
However, now I would be interested in setting the nice value of ALL active processes. I am coming from the Windows world, so what I tried was:
renice 19 -p *
Of course it is not working... Does anyone have a quick solution for how to do that in Linux?
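renice has no wildcard, but it does accept a user, and a shell loop covers the "truly everything" case; a hedged sketch (the user name is a placeholder, and processes you are not allowed to touch are silently skipped):
renice 19 -u someuser                                                              # all processes owned by that user
for pid in $(ps -eo pid --no-headers); do renice 19 -p "$pid" 2>/dev/null; done    # every process on the system (run as root)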