I don't know about your computer, but when mine is working properly, no process is sucking 95%+ CPU over time. I would like some failsafe that kills any process behaving like that. This comes to mind because when I woke up this morning, my laptop had been crunching all night on a stray Chromium child process.
This can probably be done as a cron job, but before I make a full-time job of creating something like this, I thought I should check here. :) I hate reinventing the wheel.
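A minimal sketch of such a cron job, assuming nothing beyond ps and awk. Note that ps's %CPU column is cumulative CPU time divided by elapsed time, which matches the "95%+ over time" symptom above; the threshold and the dry-run behaviour are my own choices.

```shell
#!/bin/sh
# Hypothetical watchdog: report (or kill) processes whose lifetime
# CPU share exceeds a threshold.
THRESHOLD=95

hogs() {
    # Print "pid pcpu comm" for every process above the threshold.
    ps -eo pid=,pcpu=,comm= | awk -v t="$THRESHOLD" '$2 > t { print $1, $2, $3 }'
}

# Dry run: report instead of kill. Once trusted, swap echo for kill.
hogs | while read -r pid pcpu comm; do
    echo "would kill $pid ($comm at ${pcpu}%)"
done
```

Run it from cron every few minutes; keep the echo until you are confident it never flags a process you care about.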
 PID  USER PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
3861  user 20  0  904m 128m  33m S  0.7  6.4  1:11.52 xulrunner-bin
1323  user 20  0 1555m  95m  31m S 13.5  4.8  4:06.87 gnome-shell
3494  user 20  0 1028m  50m  21m S 12.8  2.5  1:43.32 evolution
I'm just wondering what the difference is between RES, SHR, and VIRT.
1) VIRT always seems to be the highest. Does this include the paging file system (virtual memory on the hard disk, the swap space)?
2) Is RES the actual physical RAM the process is using?
3) Is SHR memory that is shared with other processes?
4) Just a final question. As I am running on an HP Mini 210, memory and CPU are resources I don't have in abundance. So if I were to compare, for example, two different browsers such as Firefox and Midori, what should I benchmark between the two to find which one uses fewer resources?
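For a low-RAM netbook, RES (called RSS in ps) is usually the number to watch: it is the physical RAM actually occupied, while VIRT counts address space that may never be backed by RAM at all. A rough comparison sketch, assuming each browser's processes are named after its binary:

```shell
#!/bin/sh
# Sum the resident set size (RSS, in KiB) of every process with a
# given command name; top's RES column is this same figure.
mem_of() {
    ps -C "$1" -o rss= | awk '{ sum += $1 } END { print sum + 0 }'
}

echo "firefox: $(mem_of firefox) KiB resident"
echo "midori:  $(mem_of midori) KiB resident"
```

Open the same pages in both browsers before measuring, since memory use scales with open tabs; a browser that is not running simply reports 0.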
I want a distro I can run on my very weak netbook, and perhaps on one or two of my other computers as well. The netbook is an Asus eeePC 900SD (Celeron 800MHz, 512MB RAM, 8GB SSD, 1024x600 screen resolution), very slow with some distros but nimble with others.
I've tried:
- Ubuntu Netbook Remix (EasyPeasy), Leeenux, JoliCloud: too resource-hungry on this machine, and too much storage consumed just for the OS.
- Peppermint OS. Pros: works well, nice, very few bugs, fast. Cons: space and memory requirements make it a bit tight; cloud apps are slower than locally installed ones; the permanent inclusion of a paid-subscription cloud app; and fascist support forum moderators. A bit overweight, and way too cloud-centric - many of the cloud apps are on unreliable servers and are not always available, or slow your netbook to a crawl while it waits for some executable code to come off the web.
- Puppeee 1.0 (and Fluppy for all netbooks): works very well, very fast, in little RAM, with little disk space required. Some may not like the overcrowded menus and the structure inherited from the parent Puppy.
- Puppy 5.1: works very well compared to the 4.3 series, and wifi works now. But the same menu comments as for Puppeee apply.
- Slitaz: at 30MB for the ISO it sounded promising, and the interface is very nice, much nicer than any of the other minimalistic distros. But wifi? No help on the horizon.
- AntiX: some things just didn't work properly, including wifi WPA, but it looked really good. For its space and memory requirements, look to Peppermint.
- TinyMe 2010: this is the size of Puppy and polished like Peppermint. Based on a slimmed-down Unity, it is still in beta, and the installer won't install from a USB stick. If you have a CD to install from, this is a great distro! Let's hope they fix the USB issue soon. Very promising... keep a watch on this one.
I've tried dozens of distros, and find it frustrating to deal with the various crippling flaws of some distros and the egos of the assemblers of others (where they could easily fix something but refuse to, because they prefer an older, faulty way). I am at my wit's end here. Please help me, someone.
From us noobs' point of view: given the re-release of Windows XP for legacy computers with only 64MB of RAM, it may be time to revisit our thinking that minimalistic Linux distros are the only kid on the block for slower machines with fewer resources. Time to get back to the drawing board and make these a little more user-welcoming. ;-)
I have an Ubuntu machine hosting a Windows XP box (using VirtualBox). The Windows XP box is connected to work using Check Point VPN-1. Essentially this enables me to go to my Windows box and do something like ping comp-at-work, and it just works.
I would like to access the VPN resources from the Linux host though. The Windows guest is only there because the VPN client isn't working in Linux. If I could somehow ssh from the Linux host right into my computer at work (using remote desktop would also be great), that would save me a lot of round trips between my Linux host and the Windows guest.
If I forget to close a file, a socket, or any other resource in a Linux process, and the process terminates, will those resources be freed? Is there a difference if the process terminates normally or is killed?
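As a point of reference, the kernel tears down per-process resources (open file descriptors, sockets, memory mappings) when a process dies, whether it exits normally or is SIGKILLed; what is not cleaned up automatically are things designed to outlive processes, such as System V shared memory segments. A small demonstration of the descriptor case:

```shell
#!/bin/sh
# Open a file in a background process, kill -9 it, and confirm the
# kernel tore down its descriptor table along with the process.
tmp=$(mktemp)
sleep 60 > "$tmp" &
pid=$!
ls "/proc/$pid/fd" > /dev/null        # fd table exists while it runs
kill -9 "$pid"
wait "$pid" 2> /dev/null
if [ ! -d "/proc/$pid/fd" ]; then
    released=yes
    echo "process $pid gone; its descriptors were released"
fi
rm -f "$tmp"
```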
I've got a ~/.Xdefaults that has a specific color theme defined for Xorg, and this works. I've got a ~/.XdefaultsNew that specifies an alternate color theme. Xorg starts and loads ~/.Xdefaults which is correct. After running some applications, I run
This overrides all my X resources with the newly defined values (correctly). If I open a new window, the theme is applied correctly. However, all the previously opened windows retain the original theme. Is there a way to force X to "re-theme" all the windows it is managing with the currently loaded X resources?
I had this error when installing and running a vncserver before, which I have now removed. However, the xterms seem to remain in the system and keep regenerating themselves. Should the PIDs stay the same each time I run this?
I need to create a small list of processes in a monitor.conf file. A shell script needs to check the status of these processes and restart if they are down. This shell script needs to be run every couple of minutes.
The output of the shell script needs to be recorded in a log file.
So far I have created a blank monitor.conf file, and I have gotten the shell script to run automatically every couple of minutes. The shell script also sends some default test information to the log file.
How do I go about doing this part? The shell script needs to check the status of these processes and restart them if they are down.
I have put the commands below in the conf file, but I am not sure if this is right.
ps ax | grep httpd
ps ax | grep apache
I also don't know if the shell script should read from the conf file, or if the conf file should send information to the shell script.
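One way the pieces could fit together, with the script reading names from monitor.conf (one process name per line) rather than putting commands in the conf. The "service <name> start" restart command is an assumption - substitute whatever starts your daemons - and the demo conf entry plus stand-in sleep process exist only so the sketch can be dry-run:

```shell
#!/bin/sh
# monitor.sh - check each process named in monitor.conf, restart any
# that are down, and log every check. Run it from cron every few
# minutes. "service <name> start" is a placeholder restart command.
CONF=monitor.conf
LOG=monitor.log

check() {
    if pgrep -x "$1" > /dev/null; then
        echo "$(date '+%F %T') $1 running" >> "$LOG"
    else
        echo "$(date '+%F %T') $1 DOWN, restarting" >> "$LOG"
        service "$1" start >> "$LOG" 2>&1
    fi
}

# Demo config and a stand-in daemon so the script can be dry-run;
# in production the conf would list httpd, apache2, mysqld, ...
printf 'sleep\n' > "$CONF"
sleep 60 & demo_pid=$!

while read -r name; do
    [ -n "$name" ] && check "$name"
done < "$CONF"
```

So the conf file stays pure data and the script owns all the logic, which keeps the cron entry simple: one script, no arguments.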
I know that you can modify the nice value of a particular process as follows:
renice 19 -p 4567
However, now I would like to set the nice value of ALL active processes. I am coming from the Windows world, so what I tried was:
renice 19 -p *
Of course that does not work... Does anyone have a quick solution for how to do that in Linux?
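One sketch of the "all processes" case: renice accepts a list of PIDs, so you can feed it every PID ps knows about and discard the errors for processes you aren't allowed to touch (run as root to cover other users' processes too):

```shell
#!/bin/sh
# Renice every process on the system to 19 (lowest priority).
# Attempts on processes you lack permission for fail silently.
for pid in $(ps -eo pid=); do
    renice 19 -p "$pid" > /dev/null 2>&1
done
```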
I am writing code in which two processes created with fork() communicate. The parent reads a file, writes the data into shared memory, and sends a signal to the child. The child waits for the parent's signal before it starts reading. After finishing the read operation, the child signals the parent to resume. Some things are going wrong in my code:
1. A segmentation fault in the memcpy() call.
2. The terminal hangs after running the code.
3. Synchronization problems between the processes.
I have a question. I want to monitor:
- CPU usage, daily
- RAM usage, daily
- hard disk space
- top processes
- hardware failures
What commands do I need to run to output the results to a log file? I know there are solutions, both paid and free, but my company does not allow them; they want built-in Linux commands or methods. I do not know bash scripting. I know some commands like "df -h" to monitor hard disk space, but I'm not sure about the other items.
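A sketch using only stock tools, appending one timestamped snapshot per run (schedule it daily from cron). The log path is an assumption - point it wherever you keep logs; hardware-failure checks usually mean scanning dmesg or SMART data, typically need root, and are left as a comment:

```shell
#!/bin/sh
# sysmon.sh - append one snapshot of disk, memory and top CPU users.
# LOG path is an assumption; /var/log/sysmon.log would be typical.
LOG=sysmon.log

{
    echo "=== $(date) ==="
    echo "--- disk space (df -h) ---"
    df -h
    echo "--- memory (/proc/meminfo) ---"
    head -3 /proc/meminfo
    echo "--- top CPU consumers ---"
    ps aux --sort=-pcpu | head -6
    # Hardware errors usually need root to inspect, e.g.:
    #   dmesg | grep -iE 'error|fail' | tail
} >> "$LOG"
```

A crontab line like `0 6 * * * /path/to/sysmon.sh` would then take one snapshot every morning.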
The following code is for monitoring the memory used by Apache processes. But there is a problem: the total this script reports is much larger than the physical memory. I was told that some libraries are used simultaneously by many processes, so my total double-counts those shared parts, because Apache has many httpd processes.
Does anyone have an idea how to measure the memory actually used by a group of processes?
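One way around the double counting on Linux is PSS (proportional set size) from /proc/PID/smaps, which splits each shared page evenly among the processes mapping it; summing PSS over all httpd processes therefore is not inflated the way a plain RSS sum is. A sketch, assuming a kernel with smaps PSS reporting (2.6.25+) and permission to read the target processes:

```shell
#!/bin/sh
# Sum PSS (in KiB) across every httpd process. Shared library pages
# are apportioned among their users, so nothing is counted twice.
total=0
for pid in $(pgrep httpd); do
    pss=$(awk '/^Pss:/ { s += $2 } END { print s + 0 }' "/proc/$pid/smaps" 2> /dev/null)
    total=$((total + ${pss:-0}))
done
echo "httpd total PSS: ${total} KiB"
```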
This script prints a natural number five times a second.
3. Then in the second bash window I type (as root):
The script test2 looks as follows:
while true; do true; done
During the following 15 seconds, test2 is the process with the highest real-time priority. As far as I know, the script doesn't perform any system calls, so it shouldn't be suspended even for a minimal timeslice. My question is: why does the process test1 manage to print a few numbers on the screen before test2 stops? I thought that test2 would exclusively own the processor for 15 seconds.
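One plausible explanation, worth checking on your kernel: by default, Linux reserves a fraction of every scheduling period for non-realtime tasks, precisely so that a runaway realtime loop like test2 cannot lock everything else out. With the stock values shown below, realtime tasks may consume at most 0.95s of every 1s period, which leaves test1 its occasional slices:

```shell
# Stock kernels ship 1000000/950000: realtime tasks may use 95% of
# each 1-second period; the remainder goes to ordinary tasks.
cat /proc/sys/kernel/sched_rt_period_us
cat /proc/sys/kernel/sched_rt_runtime_us
```

Writing -1 to sched_rt_runtime_us disables the reservation, at the cost of making a runaway realtime task able to freeze the machine.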
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide", by Machtelt Garrels. There's one question on the 4th chapter (Processes), that I found confusing: Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the following command (considering that my username is sl33p):
$ ps -u sl33p
...does that give me the right answer?
The ps man page says: -u userlist Select by effective user ID (EUID) or name.
This selects the processes whose effective user name or ID is in userlist. The effective user ID describes the user whose file access permissions are used by the process (see geteuid(2)). Identical to U and --user.
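Since the question says "based on process entries in /proc", the author may want the exercise done by hand rather than via ps: walk the numeric directories in /proc, keep the ones your UID owns, and read each one's cmdline (a NUL-separated string) to see what the process actually represents. A sketch:

```shell
#!/bin/sh
# List "<pid> <command line>" for every /proc entry owned by my UID.
uid=$(id -u)
for d in /proc/[0-9]*; do
    [ "$(stat -c %u "$d" 2> /dev/null)" = "$uid" ] || continue
    pid=${d#/proc/}
    cmd=$(tr '\0' ' ' < "$d/cmdline" 2> /dev/null)
    printf '%s\t%s\n' "$pid" "${cmd:-[no cmdline]}"
done
```

Entries with an empty cmdline are typically kernel threads or zombies; for those, /proc/PID/status still gives the name.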
What is the default nice value for processes?The setpriority() function sets the nice value of a process, all processes in a process group, or all processes for a specified user to the specified value. If the process is multi-threaded, the nice value affects all threads in the process. The default nice value is 0; lower nice values cause more favorable scheduling.
Can anyone explain to me why there are sometimes 10 or 15 processes with the same title and "stats" listed in htop? I'm guessing there are multiple threads running - but that many of them obviously couldn't be running concurrently.
Is there any sort of performance hit taken if a process uses say, 15 non-concurrent threads vs. 10 non-concurrent threads?
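For context on the htop behaviour above: htop shows every thread of a process as its own row by default (the H key toggles this off), which is why one multithreaded program can appear 10 or 15 times with identical stats. To see thread counts per process instead, ps can report NLWP ("number of light-weight processes"):

```shell
# Processes sorted by thread count, busiest first.
ps -eo pid,nlwp,comm --sort=-nlwp | head
```

On the performance question: threads that are asleep mostly cost memory (a stack each) rather than CPU, so 15 non-concurrent threads versus 10 is rarely a measurable hit.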
Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' these programs has some impact on the running of the stuck programs... what's happening here? Did strace (via ptrace(2), I guess?) send a signal, causing the program to stop blocking, or something like that?
I've seen this several times -- most recently on Linux RHEL 4 (with a Perl script mucking with processes and doing some network IO, in that case), but in a few other contexts as well. Unfortunately, I can't reproduce this, as it tends to happen... in times of crisis. But my curiosity remains.
user@host$ killall -9 -u user
Will it definitely kill all processes owned by user (including fork bombs)?
Assume:
- No new processes are spawned for this user by other users.
- None of the user's processes are in D-sleep and unkillable.
- No processes are trying to detect and ptrace or terminate the started killall (but they can ptrace or do other things to each other).
- There is a ulimit that prevents too many processes (but killall is already started and has allocated its memory).
E.g. if killall finishes untampered with and successfully, is it 100% certain that no processes are left with this UID? If not, how do I do it properly (with standard commands and no root access)? Will SysRq+I definitely kill everything (even replicating processes)?
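For the fork-bomb case specifically, a single killall -KILL can race the bomb: children forked after killall's PID scan survive it. The usual counter is to freeze everything first - stopped processes cannot fork - and only then kill. Sketched as a function here, since calling it on your own account also kills your shell; the user name in the example is hypothetical:

```shell
#!/bin/sh
# Freeze, then kill: SIGSTOP cannot be caught or ignored, and a
# stopped process cannot fork, so no new children can appear between
# the two passes. SIGKILL still works on stopped processes.
nuke_user() {
    killall -STOP -u "$1"
    killall -KILL -u "$1"
}

# Example (hypothetical user; do NOT run against your own account):
#   nuke_user baduser
```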