General :: Strace / Truss Sometimes 'Fix' Stuck Processes - Why Is That So?
Apr 23, 2010
Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' with these programs has some impact on the running of the stuck programs... what's happening here? Did strace (I guess via ptrace(2)?) send a signal, causing the program to stop blocking, or something like that?
I've seen this several times -- most recently on Linux RHEL 4 (with a Perl script mucking with processes and doing some network I/O, in that case), but in a few other contexts as well. Unfortunately, I can't reproduce this, as it tends to happen... in times of crisis. But my curiosity remains.
How are the options of strace used, with examples, such as strace -e trace=file, strace -e trace=process, strace -e trace=signal, strace -e trace=network, and strace -e trace=ipc?
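A quick hedged illustration of those option classes (the traced programs here are just placeholders):
Code:
# file-related calls: open, stat, chmod, unlink, ...
strace -e trace=file ls
# process management: fork, exec, wait, exit, ...
strace -e trace=process ./prog
# signal-related calls: kill, sigaction, sigprocmask, ...
strace -e trace=signal ./prog
# networking calls: socket, connect, send, recv, ...
strace -e trace=network ./prog
# SysV IPC calls: shmget, semop, msgsnd, ...
strace -e trace=ipc ./prog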
I want to run a program under ... something like strace, something like gdb. Let's call the something Fred. Every time I run a particular program under Fred, it detects when a system call is about to take place or a signal has occurred, and traps to Fred, which decides dynamically how to respond, whether the stimulus is a signal or an attempted open(), close(), read(), write(), socket(), connect(), listen(), select(), ioctl(), time(), or whatever. Although strace does this marvelously, what I'd like Fred to do is use code that I supply to doctor up what the subject program sees. In the case of write(), it should be able to modify what actually gets written. In other words, a hard shell environment around the program which completely mediates between the subject program and the outside world. I'll start with the source code for strace and gdb if I have to. But has this been done already?
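For what it's worth, newer strace releases (4.15 and later, well after this question was written) grew a syscall-tampering feature that covers part of what Fred should do: it can change a syscall's return value, inject an error or a signal, or delay it, though it cannot rewrite the data buffer a write() sends. A hedged sketch:
Code:
# Make every write() in the subject appear to succeed without writing:
strace -e trace=write -e inject=write:retval=0 -- ./subject-program
# Fail only the first open() with ENOENT:
strace -e inject=open:error=ENOENT:when=1 -- ./subject-program
Actually modifying the bytes written would still take something like an LD_PRELOAD shim around write(), or a custom ptrace-based supervisor like the Fred described above.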
I had this error when installing and running a vncserver before, which I have now removed. However, the xterms seem to remain on the system and keep regenerating themselves. Should the PIDs stay the same each time I run this?
I need to create a small list of processes in a monitor.conf file. A shell script needs to check the status of these processes and restart them if they are down. This shell script needs to be run every couple of minutes.
The output of the shell script needs to be recorded in a log file.
So far I have created a blank monitor.conf file, and I have gotten the shell script to run automatically every couple of minutes. The shell script also sends some default test information to the log file.
How do I go about doing this part: "A shell script needs to check the status of these processes and restart them if they are down"?
I have put the commands below in the conf file, but I am not sure if this is right.
ps ax | grep httpd
ps ax | grep apache
I also don't know if the shell script should read from the conf file or if the conf file should send information to the shell script.
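A minimal sketch of one way to wire this together, assuming monitor.conf holds one entry per line in the form name|restart-command (the format and all paths here are made up for illustration):
Code:
#!/bin/sh
# check_procs.sh - restart listed processes if they are down (sketch)
CONF=/path/to/monitor.conf
LOG=/path/to/monitor.log

while IFS='|' read -r name restart; do
    # skip blank lines and comments
    case "$name" in ''|\#*) continue ;; esac
    if pgrep -x "$name" >/dev/null 2>&1; then
        echo "$(date '+%F %T') $name is running" >> "$LOG"
    else
        echo "$(date '+%F %T') $name is DOWN, restarting" >> "$LOG"
        $restart >> "$LOG" 2>&1
    fi
done < "$CONF"
A crontab entry such as */2 * * * * /path/to/check_procs.sh would run it every two minutes. Note that pgrep -x avoids the classic problem of grep matching its own process in ps output.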
I know that you can modify the nice value of a particular process as follows:
renice 19 -p 4567
However, I would now like to set the nice value of ALL active processes. I am coming from the Windows world, so what I tried was:
renice 19 -p *
Of course that doesn't work... Does anyone have a quick solution for how to do this in Linux?
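One hedged way to approximate this is to loop over every PID, or to use renice's -u option to target all of a user's processes at once (renicing processes you don't own requires root):
Code:
# loop over all PIDs; failures on foreign processes are silenced
for pid in $(ps -e -o pid=); do
    renice 19 -p "$pid" 2>/dev/null
done
# or, for everything owned by one user:
renice 19 -u someuser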
I would like to get a log of all processes that are launched, with the time they were launched and the arguments they were launched with. Is this possible in Linux?
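Two commonly used mechanisms, both assuming root access: BSD-style process accounting (the psacct/acct package), which logs command names but not their arguments, and the audit subsystem, which can record execve() calls with full arguments. A sketch (accounting file paths vary by distribution):
Code:
# 1. process accounting (no arguments recorded):
accton /var/log/account/pacct
lastcomm                    # review commands after the fact
# 2. auditd rule recording every execve() with its arguments:
auditctl -a always,exit -F arch=b64 -S execve
ausearch -sc execve         # query the resulting records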
I am writing code that communicates between two processes created with fork(). The parent reads a file, writes the data into shared memory, and sends a signal to the child. The child waits for the signal from the parent before it starts reading. After finishing the read operation, the child signals the parent to resume. Some things are going wrong in my code:
1. Segmentation fault in the memcpy() call.
2. The terminal hangs after running the code.
3. Synchronization problems between the processes.
I have a question. I want to monitor:
- CPU usage, daily
- RAM usage, daily
- hard disk space
- top processes
- hardware failures
What commands do I need to run to output the results to a log file? I know there are solutions, both paid and free, but my company does not allow them; they want built-in Linux commands or methods. I do not know bash scripting. I know some commands, like "df -h" to monitor hard disk space, but I am not sure about the other items.
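A hedged sketch using only standard tools, appending everything to a dated log file (the paths and the "top 10" cutoff are arbitrary choices):
Code:
#!/bin/sh
LOG=/var/log/sysmon-$(date +%F).log
{
    date
    echo "=== CPU / load ===";     uptime
    echo "=== Memory ===";         free -m
    echo "=== Disk space ===";     df -h
    echo "=== Top processes ==="; ps aux --sort=-%cpu | head -n 10
    echo "=== Recent kernel messages ==="; dmesg | tail -n 20
} >> "$LOG"
Run it from cron as often as needed. Hardware failures are harder to catch generically; watching dmesg and, where available, smartctl output is a common starting point.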
The following code is for monitoring the memory used by Apache processes. The problem is that the figure I get from this script is much larger than physical memory. I was told that some libraries are used simultaneously by many processes, so my figure double-counts those shared parts, because Apache runs many httpd processes.
Does anyone have an idea how to measure the memory actually used across multiple processes?
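Summing per-process RSS double-counts shared library pages. A fairer per-process figure comes from /proc/<pid>/smaps: either the Private_* fields (memory no other process shares) or, on newer kernels, the Pss field (shared pages divided proportionally among their users). A hedged sketch:
Code:
total=0
for pid in $(pgrep httpd); do
    kb=$(awk '/^Private/ {sum += $2} END {print sum+0}' "/proc/$pid/smaps" 2>/dev/null)
    total=$((total + ${kb:-0}))
done
echo "approx. private memory used by httpd: ${total} kB"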
This script prints a natural number 5 times a second.
3. Then in the second bash window I type (as root):
Code:
The script test2 looks as follows:
Code:
while true; do true; done
During the following 15 seconds, test2 is the process with the highest real-time priority. As far as I know the script doesn't perform any system calls, so it shouldn't be suspended even for a minimal timeslice. My question is: why does the process test1 manage to print a few numbers on the screen before test2 stops? I thought that test2 would own the processor exclusively for 15 seconds.
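The command in step 3 isn't shown in the original post; as a purely hypothetical illustration, granting a real-time FIFO priority for 15 seconds might look like this (chrt is part of util-linux):
Code:
chrt -f 99 ./test2 &    # run test2 under SCHED_FIFO, priority 99
sleep 15
kill %1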
That would show me at least any active ftp connections started with the ftp command, right? Is there then a way to use that to somehow kill any stuck sessions that are older than an hour?
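A hedged sketch of the kill-by-age part, assuming a GNU ps that supports the etimes (elapsed seconds) output field:
Code:
# kill ftp client processes older than one hour
for pid in $(pgrep -x ftp); do
    age=$(ps -o etimes= -p "$pid" | tr -d ' ')
    [ "${age:-0}" -gt 3600 ] && kill "$pid"
done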
I list all the instances of a running process by doing:
ps -ef | grep myprogram
This lists all of them. How can I simply output a count of how many are running?
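Two common one-liners, the second using the bracket trick to stop grep from matching itself:
Code:
pgrep -c myprogram
# or:
ps -ef | grep -c '[m]yprogram'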
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide", by Machtelt Garrels. There's one question in the 4th chapter (Processes) that I found confusing. Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the command (considering that my username is sl33p): Code: $ ps -u sl33p ...does that give me the right answer?
The ps man page says: -u userlist Select by effective user ID (EUID) or name.
This selects the processes whose effective user name or ID is in userlist. The effective user ID describes the user whose file access permissions are used by the process (see geteuid(2)). Identical to U and --user.
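The question is probably nudging you to look at /proc itself rather than at ps. A minimal sketch (bash's -O test checks whether the entry is owned by your effective UID):
Code:
for d in /proc/[0-9]*; do
    [ -O "$d" ] || continue            # only processes owned by our UID
    printf '%s: ' "${d#/proc/}"
    tr '\0' ' ' < "$d/cmdline"; echo   # cmdline is NUL-separated
done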
What is the default nice value for processes?
The setpriority() function sets the nice value of a process, all processes in a process group, or all processes for a specified user to the specified value. If the process is multi-threaded, the nice value affects all threads in the process. The default nice value is 0; lower nice values cause more favorable scheduling.
Can anyone explain to me why there are sometimes 10 or 15 processes with the same title and "stats" listed in htop? I'm guessing there are multiple threads running - but that many of them obviously couldn't be running concurrently.
Is there any sort of performance hit taken if a process uses say, 15 non-concurrent threads vs. 10 non-concurrent threads?
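Those duplicate entries are usually threads of a single process, which is easy to confirm (in htop, pressing H toggles the display of userland threads):
Code:
ps -eLf | grep myprogram   # -L shows one line per thread (LWP column)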
user@host$ killall -9 -u user
Will it definitely kill all processes owned by user (including forkbombs)?
Assume the following: no new processes are spawned for this user by other users; none of the user's processes are in D-sleep and unkillable; no processes are trying to detect and ptrace or terminate the started killall (but they can ptrace or do other things to each other); and there is a ulimit that prevents too many processes (but killall has already started and allocated its memory).
E.g., if killall finishes untampered and successfully, is it 100% certain that no processes are left with this UID? If not, how do I do it properly (with standard commands and no root access)? Will SysRq+I definitely kill everything (even replicating processes)?
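A commonly cited way to beat the respawn race (hedged, since the exact guarantees depend on the system): freeze every process first, so nothing can fork between the two signals, then kill the frozen set:
Code:
killall -STOP -u user
killall -KILL -u user
# equivalently with pkill:
pkill -STOP -u user && pkill -KILL -u user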
I am developing a daemon that is acting up, and I am now unable to create any new processes (i.e., I cannot start a new process to kill the rogue ones). So I need to be able to kill the processes from a remote machine. How do I "kill" remotely without admin privileges? If I cannot kill my own process from a remote machine as a normal user, then tell me so, and I can mark that as the correct answer.
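Without admin rights, the usual route is simply to log in remotely as the same user and kill from there (the daemon name below is hypothetical). One caveat: if a per-user process limit is what is blocking you locally, a remote login may hit the same wall, since sshd must also spawn a process as that user.
Code:
ssh user@host 'pkill -9 -x mydaemon'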
I have something like:
cd project && python manage.py runserver &
cd utilities && ./coffee_auto_compiler.py
And I want both of them to close on Ctrl-C (or some other command). How can I accomplish that?
EDIT: I tried using jobs -x kill and kill `jobs -p`, but it doesn't seem to kill what I need. Here is what I mean:
moon 8119 0.0 0.0  7556  3008 pts/0 S 13:17 0:00 /bin/bash
moon 8120 6.8 0.4 24568 18928 pts/0 S 13:17 0:00 python manage.py runserver
jobs -p gives me just process 8119, but I also need to close 8120, since that's what the first command started. If it helps, the commands are actually in a Makefile, and I want it to run two daemons at the same time (and somehow close them at the same time).
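One hedged way to do this is a wrapper script that keeps both commands in the same process group and kills the whole group from a trap:
Code:
#!/bin/bash
# kill every process in our process group on Ctrl-C or exit
trap 'kill 0' INT TERM EXIT
(cd project   && python manage.py runserver) &
(cd utilities && ./coffee_auto_compiler.py) &
wait
kill 0 signals the entire process group, which catches children that jobs -p does not list, such as the python process above.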
Using h allows me to hide the table header. Is there a way to tell ps not to print the "pts/13 S+ 0:10 cmd" part, in order to get a list of child process IDs separated by newlines?
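Two hedged options for getting bare child PIDs, one per line (the parent PID here is a placeholder):
Code:
ps --ppid "$PARENT_PID" -o pid=
# or:
pgrep -P "$PARENT_PID"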
I have a Linux server where top reports about 9GB of swap used, but I cannot figure out where the swap is being used. Some Google results said that in top, the O command followed by p will show swap usage by process. But as shown in the image from the original post, a brief sum of the SWAP column comes to more than 10GB, so where does the 9GB figure for swap usage come from? Top also reports that about 96492kB of RAM is used by buffers. Is there anything I can do to utilize this instead of using swap?
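Per-process swap can also be read straight from /proc on reasonably recent kernels (2.6.34 and later expose a VmSwap line in /proc/<pid>/status). A hedged sketch that lists the biggest swap users:
Code:
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {n=$2} /^VmSwap:/ {print $2, n}' "$f"
done | sort -rn | head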