General :: What Is The Default Nice Value For Processes
Nov 17, 2010
What is the default nice value for processes? The setpriority() function sets the nice value of a process, all processes in a process group, or all processes for a specified user to the specified value. If the process is multi-threaded, the nice value affects all threads in the process. The default nice value is 0; lower nice values cause more favorable scheduling.
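For illustration, a minimal shell sketch of how nice values are usually inspected and changed (the command name and PID below are placeholders, not from the original question):

Code:
# With no arguments, nice prints the niceness new children of this shell get (normally 0)
nice
# Start a job with a higher, less favourable nice value of 10
nice -n 10 some_long_job &
# Lower the nice value of an already-running process; negative values need root
sudo renice -n -5 -p 1234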
My CPU is set via GNOME Power Manager to automatically speedstep under demand....
The thing is, I have a few nice-level-19 processes running most of the time that eat up all the idle time; this forces my clock speed up and as such makes the fan noisy and uses more power...
Basically, what I would like to do is ignore the load from processes with a nice value over a certain level when determining whether to speedstep.
So some flash games don't seem to play nice with Ubuntu. Some do. I have no problem playing Cursed Treasure, but when I try to play, say, Learn To Fly, the opening page loads, but then when I click the "play" link, it does nothing.
Each day I need to copy N files from a source location to a mirror at a specific time (where N is very large). Let's say I tell multiple CPUs to each run an rsync simultaneously on a subset of the files (network and disk bandwidth are not an issue). Ideally each CPU would be responsible for a disjoint subset of the N files, but in practice this is sometimes hard to guarantee. (Some of the source files might be "claimed" by more than one CPU.) As a result, sometimes rsync I and rsync J will both try to copy file F at the same time.
Using rsync -avz --delete --temp-dir=/tmp remote:/path/to/source/ /path/to/dest/, let's say rsyncs I and J both see this situation to start:
[Code]...
Now rsync J finishes copying but generates an error when it tries to move its version of FileB to /path/to/dest/ because there's already another FileB there that it didn't see when it started.
Does one of rsync's many options somehow handle this situation? Ideally I'd like an option that tells rsync, "Believe in yourself. You can do no wrong. Feel free to overwrite anything your little heart desires." so that it wouldn't complain about the FileB that has suddenly appeared mid-execution.
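Rather than relying on a particular rsync flag, one way to sidestep the race is to hand every worker an explicitly disjoint file list through rsync's --files-from option. A rough sketch, assuming GNU split and four workers (paths and counts are placeholders):

Code:
# Build one complete file list, then split it into disjoint chunks, one per worker
ssh remote 'cd /path/to/source && find . -type f' > all-files.txt
split -n l/4 all-files.txt chunk.     # GNU split: four chunks without breaking lines
# Each worker copies only its own chunk, so no two rsyncs race on the same file
for c in chunk.*; do
    rsync -az --files-from="$c" remote:/path/to/source/ /path/to/dest/ &
done
wait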
In principle my java installation works perfectly. Now I tried to increase its priority with nice, so I set the setuid bit on nice: chmod u+s /usr/bin/nice
Then I tried: nice -n -20 java something, but this gives me an error saying that libjli.so is missing.
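That error is most likely the dynamic linker dropping LD_LIBRARY_PATH for setuid binaries, which leaves the java launcher unable to find its own libjli.so. A common workaround, instead of making nice setuid, is to start the JVM normally and lower its nice value afterwards as root (the jar name is a placeholder):

Code:
# Start the JVM normally, then renice it; $! is the PID of the last background job
java -jar something.jar &
sudo renice -n -20 -p $!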
I had this error when installing and running a vncserver before, which I have since removed. However, the xterms seem to remain in the system and keep regenerating themselves. Should the PIDs stay the same each time I run this?
I need to create a small list of processes in a monitor.conf file. A shell script needs to check the status of these processes and restart if they are down. This shell script needs to be run every couple of minutes.
The output of the shell script needs to be recorded in a log file.
So far I have created a blank monitor.conf file. I have gotten the shell script to run automatically every couple of minutes. The shell script also sends some default test information to the log file.
How do I go about doing this part? The shell script needs to check the status of these processes and restart them if they are down.
I have put in the conf file the below commands but I am not sure if this is right.
Code:
ps ax | grep httpd
ps ax | grep apache
I also don't know whether the shell script should read from the conf file or whether the conf file should send information to the shell script.
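A minimal sketch of such a monitor script, assuming monitor.conf lists one entry per line in the form name:/path/to/start-command (the file locations and format are assumptions, not a standard), run from cron every couple of minutes:

Code:
#!/bin/bash
# monitor.sh - restart the processes listed in monitor.conf and log what happened
CONF=/etc/monitor.conf          # lines of the form: name:/path/to/start-command
LOG=/var/log/monitor.log

while IFS=: read -r name cmd; do
    [ -z "$name" ] && continue                        # skip blank lines
    if pgrep -x "$name" > /dev/null; then
        echo "$(date): $name is running" >> "$LOG"
    else
        echo "$(date): $name is down, restarting" >> "$LOG"
        $cmd >> "$LOG" 2>&1 &
    fi
done < "$CONF"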
I know that you can modify the nice value of a particular process as follows: renice 19 -p 4567. However, now I would be interested in setting the nice value of ALL active processes. I am coming from the Windows world, so what I tried was renice 19 -p *. Of course that does not work... Does anyone have a quick solution for how to do this in Linux?
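There is no wildcard PID, but renice can already target every process owned by a user with -u, and a loop over ps output covers the literally-everything case (the username is a placeholder; reniceing processes you do not own requires root):

Code:
# All processes owned by one user
renice 19 -u someuser
# Every process on the system (run as root; some may still refuse)
for pid in $(ps -eo pid --no-headers); do
    renice 19 -p "$pid" 2>/dev/null
done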
I would like to get a log of all processes that are launched with the time that they were launched and the arguments they were launched with. Is this possible in Linux?
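One commonly used approach, assuming the audit subsystem (auditd) is installed and you have root, is to log every execve call; the key name proc_launch is arbitrary:

Code:
# Log every program execution (64-bit syscalls; add a matching rule with arch=b32 for 32-bit)
auditctl -a always,exit -F arch=b64 -S execve -k proc_launch
# Review the recorded launches with timestamps and arguments
ausearch -k proc_launch -i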
I am writing a code which communicates between 2 processes created by fork() statement. Parent reads a file and write the data into a shared memory and sends a signal to the child. The child then receives a signal from the parent to start reading. After finishing the read operation the child sends a signal to the parent asking it to resume its action. Some things are going wrong in my code.
1. Segmentation fault in the memcpy() statement.
2. The terminal hangs after running the code.
3. Synchronization problems between the processes.
I have a question. I want to monitor daily:
- CPU usage
- RAM usage
- hard disk space
- top processes
- hardware failure
What commands do I need to run to output the result to a log file? I know there are solutions, both paid and free, but my company does not allow them; they want built-in Linux commands or methods. I do not know bash scripting. I know some commands, like df -h to monitor hard disk space, but I am not sure about the other items.
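A rough sketch using only standard tools, meant to be run from cron once a day; the log path is a placeholder, and hardware-failure checking is reduced to a look at recent kernel messages since proper monitoring of that is hardware-specific:

Code:
#!/bin/bash
# sysmon.sh - append a snapshot of basic system health to a log file
LOG=/var/log/sysmon.log
{
    echo "===== $(date) ====="
    echo "--- CPU / load ---";        uptime
    echo "--- Memory ---";            free -m
    echo "--- Disk space ---";        df -h
    echo "--- Top processes ---";     ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 11
    echo "--- Recent kernel messages ---"; dmesg | tail -n 20
} >> "$LOG"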
The following code is for monitoring the memory used by Apache processes. But I have a problem: the figure I get from this script is much larger than the physical memory. I have read that some libraries are shared simultaneously by many processes, so part of the figure is double-counted, because Apache runs many httpd processes.
Does anyone have an idea of how to measure the memory actually used by a group of processes?
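One way to avoid counting shared libraries more than once, assuming a kernel that exposes the Pss field in /proc/<pid>/smaps (2.6.25 and later) and enough privileges to read those files, is to sum the proportional set size of every httpd process:

Code:
# Sum the Pss (proportional set size, in kB) of all httpd processes;
# pages shared by several processes are divided among them instead of counted in full
total=0
for pid in $(pgrep httpd); do
    pss=$(awk '/^Pss:/ {sum += $2} END {print sum+0}' "/proc/$pid/smaps" 2>/dev/null)
    total=$(( total + ${pss:-0} ))
done
echo "Apache total PSS: ${total} kB"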
This script prints a natural number 5 times a second.
3. Then in the second bash window I type (as root):
Code:
The script test2 looks as follows:
Code:
while true; do true; done
During the following 15 seconds test2 is the process with the highest real-time priority. As far as I know the script doesn't perform any system calls, so it shouldn't be suspended even for a minimal timeslice. My question is: why does the process test1 manage to print a few numbers on the screen before test2 stops? I thought that test2 would exclusively own the processor for 15 seconds.
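The exact command typed in the elided code block above is not shown; purely for illustration, a real-time priority is typically assigned with chrt from util-linux, along these lines:

Code:
# Run test2 under the SCHED_FIFO policy at real-time priority 50 (needs root)
chrt -f 50 ./test2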
That would show me at least any active ftp connections started with the ftp command, right? Is there then a way to use that to somehow kill any stuck sessions that are older than an hour?
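A sketch of the second part, assuming a procps version whose ps supports the etimes (elapsed seconds) output field; it matches the ftp client command by name:

Code:
# Kill any ftp client process that has been running for more than an hour (3600 s)
ps -eo pid,etimes,comm | awk '$3 == "ftp" && $2 > 3600 { print $1 }' | xargs -r kill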
I list all the instances of a running process by doing: ps -ef | grep myprogram. This lists all of them. How can I simply output a count of how many are running?
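Two common ways to get just a count (the program name is a placeholder); the bracket trick keeps grep from counting its own process:

Code:
# Using pgrep
pgrep -c myprogram
# Or with ps and grep, excluding the grep process itself
ps -ef | grep -c '[m]yprogram'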
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide", by Machtelt Garrels. There's one question on the 4th chapter (Processes), that I found confusing: Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the command (considering that my username is sl33p): Code: $ ps -u sl33p ...does that give me the right answer?
The ps man page says: -u userlist Select by effective user ID (EUID) or name.
This selects the processes whose effective user name or ID is in userlist. The effective user ID describes the user whose file access permissions are used by the process (see geteuid(2)). Identical to U and --user.
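For the /proc part of the question, a sketch of how one might walk the entries owned by the current user and see what each PID actually is (this assumes GNU stat and readlink are available):

Code:
# For every /proc entry owned by the current user, print the PID,
# the executable it was started from, and its full command line
for d in /proc/[0-9]*; do
    if [ "$(stat -c %u "$d")" = "$(id -u)" ]; then
        echo "PID ${d#/proc/}: $(readlink "$d/exe") -- $(tr '\0' ' ' < "$d/cmdline")"
    fi
done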
I have been asked to look for a nice SMS gateway with a good GUI, be it commercial or free. Where can I find a nice SMS gateway with a GUI? I looked at Kannel, but it didn't mention anything about a GUI, so I am not sure I will go for it.
I have freshly installed Ubuntu 10.04.1 over the Internet.
All is running well, but my nice values are all set to 0. Is there a script that handles these at boot time? How can I reset them to "appropriate" or "normal" levels (e.g. not all at 0)? I know in other installs, my nice levels vary depending on the process and the user.
Attached is a screen shot of my gnome-system-monitor, and aside from init, which I had set to -15, all others are at 0.
Computer specifications: Linux AMD-LNX000 2.6.32-25-generic #44-Ubuntu SMP Fri Sep 17 20:05:27 UTC 2010 x86_64 GNU/Linux AMD Athlon 64 Processor 3000+
Can anyone explain to me why there are sometimes 10 or 15 processes with the same title and "stats" listed in htop? I'm guessing there are multiple threads running - but that many of them obviously couldn't be running concurrently.
Is there any sort of performance hit taken if a process uses say, 15 non-concurrent threads vs. 10 non-concurrent threads?
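To see how many of those htop entries are really threads of a single process, the nlwp (number of light-weight processes) field of ps is handy; the process name is a placeholder. htop can also fold them away with its display option for hiding userland threads.

Code:
# Show the thread count (NLWP) for each instance of a process
ps -o pid,nlwp,cmd -C myprogram
# Or list every thread individually
ps -eLf | grep myprogram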
Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' these programs has some impact on the running of the stuck programs... what's happening here? Did strace (I guess via ptrace(2)?) send a signal, causing the program to cease blocking, or some such?
I've seen this several times -- most recently on Linux RHEL 4 (with a Perl script mucking with processes and doing some network IO in that case), but in a few other contexts as well. Unfortunately, I can't reproduce it, as it tends to happen ... in times of crisis. But my curiosity remains.
Code:
user@host$ killall -9 -u user

Will it definitely kill all processes owned by user (including forkbombs)?
Assume: no new processes are spawned for the user from other users; none of the user's processes are in D-sleep and unkillable; no processes are trying to detect and ptrace or terminate the already-started killall (but they can ptrace or do other things to each other); and there is a ulimit that prevents too many processes (but killall is already started and has allocated its memory).
E.g., if killall finishes untampered with and successfully, is it 100% certain that no processes are left with this UID? If not, how can it be done properly (with standard commands and no root access)? Will SysRq+I definitely kill everything (even replicating processes)?
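One widely used sketch, assuming the signals are sent from a different account (for example root) so the killing shell is not itself part of the target set, is to freeze every process first so a fork bomb cannot re-spawn between signals, and only then kill:

Code:
# Stop every process owned by the user, then kill the frozen set
pkill -STOP -u user
pkill -KILL -u user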
I am developing a daemon that is acting up, and I am now unable to create any new processes (i.e. I cannot start a new process to kill the other rogue processes). So I need to be able to kill the processes from a remote machine. How do I "kill" remotely without admin privileges? If I cannot kill my own process from a remote machine as a normal user, then tell me so I can mark that as the correct answer.