I need a way to kill off PIDs 2819 and 2820 because, unlike PIDs 2918, 2922, and 6657, they do not have a process tied to them. The way it works is: a peek shell (PID 2918) is opened, it then starts a ksh session (PID 2922), and from there the end user runs a command (PID 6657).
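If the goal is to find the session PIDs that no longer have a child process attached, something like the following sketch might work; the ksh process name and the commented-out kill are assumptions to verify against your system first.

Code:
# Sketch: list ksh processes that have no children left (verify the
# output before uncommenting the kill).
for pid in $(pgrep ksh); do
    if ! pgrep -P "$pid" > /dev/null; then
        echo "ksh $pid has no child process"
        # kill "$pid"
    fi
done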
which in theory should pull *only* the PID of "mono user1.exe" and kill only that. The problem: it kills every single instance of mono that is running on my system, every userx.exe that's open. I am confused, as a simple "ps aux | grep 'mono user1.exe'" returns only the mono user1.exe process and not the others, while "ps aux | grep 'mono'" returns them all. How can I modify that script so that it only kills the specific process? Would "pkill -9 -f 'mono MCuser1.exe'" work as well, or would it too kill every instance of mono? I can't do much more trial and error; it's not good that I keep killing those instances accidentally...
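For what it's worth, pkill -f matches the pattern against the full command line rather than just the process name, so a sufficiently specific (and ideally anchored) pattern should hit only the one process. A sketch:

Code:
# Verify the match first: pgrep -f uses the same matching as pkill -f.
pgrep -f 'mono user1\.exe'
# Then kill only that exact command line. Escape the dot and anchor
# the pattern; drop the ^ anchor if mono is invoked with a full path.
pkill -9 -f '^mono user1\.exe$'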
I'm looking for a way in Perl to take a list of servers, ssh multiple commands to each one, and store the results. If I do this process serially, sometimes one server will hang the whole script, and even when it doesn't, the run still takes hours to complete.
I'm thinking what I need to do is write a parent loop that spawns a separate child process for each server name and executes all the commands I have defined within that process. If one server hangs, at least that won't stop the script from working through all the other servers in the list.
I'm guessing the fork() command would serve me best; however, all the online descriptions I have found have been vague at best.
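The question is about Perl's fork(), but the same pattern can be sketched in shell: run each server's command list in its own background process under a timeout, so one hung host only loses its own child. The host names, commands, and 300-second timeout below are illustrative.

Code:
#!/bin/bash
# Illustrative host list and commands.
for h in server1 server2 server3; do
    # Each host gets its own child; timeout(1) reaps it if it hangs.
    timeout 300 ssh -o ConnectTimeout=10 "$h" 'uname -a; uptime' \
        > "results.$h" 2>&1 &
done
wait   # collect every child before moving on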
I have written a simple script which has to find required patterns in a bunch of files (each around 2 GB, containing the output of seq 1 10000000000000) on an 8-core machine. I am currently forking 6 child processes which run simultaneously on 6 cores of the processor; each has to search for the required pattern in a different file and inform the parent process through a pipe when a pattern is found.
The problem is that when a child process is done reading its text file looking for a pattern, it becomes a zombie process. It exits cleanly when I put a $SIG{CHLD} = "IGNORE"; in the script. Can anyone tell me what's going on, and how do I improve the communication between the child and parent processes?
I have an issue on one of my servers whereby the [normally very helpful] du and tar programs are somehow using up too much of my system resources (du 40% mem, tar 20% mem) and causing problems. I am after a command which can kill a process without knowledge of its PID, matching instead by process name (e.g. "du") and memory usage (e.g. >= 10%).
Something along the lines of: kill $(pgrep du) grep %MEM > 10
Although I know that is invalid syntax, I cannot fathom the correct/best way to achieve this end!
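A minimal sketch of what that might look like with valid syntax, assuming a 10% memory threshold and a plain SIGTERM:

Code:
# Kill any "du" process using 10% or more of memory (threshold and
# signal are illustrative; add -9 only if SIGTERM is ignored).
ps -C du -o pid=,%mem= | awk '$2 >= 10 { print $1 }' | xargs -r kill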
that would show me at least any active ftp connections started with the ftp command, right? Is there then a way to use that to somehow kill any stuck sessions that are older than an hour?
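One possible sketch, assuming a ps that supports the etimes (elapsed seconds) output column:

Code:
# Kill ftp client processes that have been running for over an hour.
# etimes needs a reasonably recent procps; 3600 s = 1 h.
ps -C ftp -o pid=,etimes= | awk '$2 > 3600 { print $1 }' | xargs -r kill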
I am developing a daemon that is acting up, and I am now unable to create any new processes (i.e. I cannot start a new process to kill the rogue ones). So, I need to be able to kill the processes from a remote machine. How do I "kill" remotely without admin privileges? If I cannot kill my own process from a remote machine as a normal user, then tell me so, and I can mark that as the correct answer.
I'm working with Eclipse, and it's starting to misbehave now and then in a way that completely freezes my computer. Is there any emergency command to kill such a misbehaving process so I don't have to reboot my computer?
I already have an emergency xkill icon in my taskbar and a [Ctrl]+[F1] console with "> sudo killall eclipse" pretyped(!), but sometimes it's too late even for that. What I would need is an emergency command/console that gets a guaranteed amount of processor time so I can kill these processes.
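One workaround sketch, assuming a spare console: keep a root shell running at a high scheduling priority so it still gets CPU time when Eclipse starts thrashing. The niceness value is illustrative.

Code:
# Pre-start a high-priority shell on the spare console; an idle
# shell costs nothing while it sits at the prompt.
sudo nice -n -19 bash
# ...then, when everything else freezes:
killall -9 eclipse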
I've run into what is apparently an age-old SSH problem: killing an ssh client process does not kill the remote process (unlike, e.g., rsh). There seem to be lots of patches and a couple of open bugs on this topic that have been around for about 10 years or so... Having convinced myself by googling that there is no easy solution, I'm now looking for a workaround of some sort. I'm writing a testing framework, so the processes I'm running remotely could be anything at all, i.e. I only have control of the client side. The remote processes are also, of course, highly unstable, and I need to be able to terminate them if they hang. ssh -t won't work for me as I don't necessarily have a terminal. Finding the remote process ID would be enough, since then I could do ssh <machine> kill <pid>, but I don't see any way to do that either. Just using ps, pgrep, etc. suffers from not being able to uniquely identify the correct process, and killing the wrong process is of course very bad.
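One client-side-only workaround sketch: have the remote shell background the command and print its PID up front, so the controller can kill that exact PID later. The host name and binary are placeholders.

Code:
# Launch remotely in the background and capture the remote PID.
pid=$(ssh testhost 'nohup ./some_test_binary > /dev/null 2>&1 & echo $!')
# ...later, if the test hangs:
ssh testhost "kill -9 $pid"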
None of the kill-idle-user-processes scripts I've seen take into account that a user might have multiple sessions open. Such is the case with one of our clients. Currently, every hour or two I need to do the following:
This will get the TTY and idle time for all users.
For each idle time over half an hour, I do the following (the TTY is the one from the previous command, preceded by a space).
I then kill those processes.
There must be a way to do this automatically in a bash or perl script. I've tried both, but can't seem to get things to work properly.
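One way to sketch it in bash, using the access time of the pty device as the idle measure rather than parsing idle-time columns; the 30-minute threshold is illustrative.

Code:
# Kill every process on any pty that has been idle for over 30 min.
find /dev/pts -mindepth 1 -name '[0-9]*' -amin +30 | while read -r dev; do
    pkill -t "pts/${dev##*/}"   # pkill -t takes names like pts/3
done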
I was referring to an article on the following website: [URL]. I was surprised to learn that I can kill all running processes by using kill 0. However, when I tried running the command, nothing happened.
My machine details:
Code:
# lsb_release -a
LSB Version:    :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: EnterpriseEnterpriseServer
Description:    Enterprise Linux Enterprise Linux Server release 5.2 (Carthage)
Release:        5.2
Codename:       Carthage
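For context, kill 0 signals the caller's own process group. One plausible explanation for "nothing happened" is that, at an interactive prompt with job control, that group contains little beyond the shell itself, and interactive bash ignores SIGTERM. A sketch of both forms (the PGID is hypothetical):

Code:
kill -TERM 0           # signals every process in the current group
kill -TERM -- -12345   # signals an explicit process group by PGID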
I'm attempting to use 'killall' to kill all mysql processes; however, after using the command, the mysql processes are still alive. 'killall mysql' says no processes were killed, and while 'killall mysqld_safe' gives no message, there are still mysql processes alive afterwards.
Code:
# killall mysql
mysql: no process killed
# killall mysqld_safe
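A likely explanation is that killall matches exact process names, and the MySQL server process is normally named mysqld, not mysql. A quick check:

Code:
ps -C mysqld -o pid=,comm=   # confirm the actual process name
killall mysqld               # then kill by that name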
I don't know about your computer, but when mine is working properly, no process sucks up 95%+ CPU over a long stretch. I would like to have some failsafe that kills any process behaving like that. This comes to mind because when I woke up this morning, my laptop had been crunching all night long on a stray chromium child process.
This can probably be done as a cron job, but before I make a full-time job of creating something like this, I thought I should check here. :) I hate reinventing the wheel.
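A cron-able sketch of the idea, with the caveat that ps reports %CPU averaged over the process lifetime rather than an instantaneous reading; the 95% threshold and bare SIGTERM are illustrative.

Code:
# Kill anything whose lifetime-average CPU usage exceeds 95%.
ps -eo pid=,pcpu= | awk '$2 > 95 { print $1 }' | xargs -r kill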
How do I kill processes accessing the Internet in the background using terminal commands? I'm looking for a command to stop (disconnect) the processes accessing the Internet, and a command to kill them.
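A blunt sketch using lsof, which can emit just the PIDs of processes holding established TCP connections; review the list before enabling the kill.

Code:
lsof -iTCP -sTCP:ESTABLISHED -t | sort -u            # inspect first
# lsof -iTCP -sTCP:ESTABLISHED -t | sort -u | xargs -r kill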
I thought 'killall' would work, but it needs the "command" to kill. I'm really looking for a command that will kill all processes that have a particular file/directory open. Currently, my script fails on an 'umount' because several processes have the filesystem open. The command 'lsof' is a good tool for determining which processes have a filesystem open, but I don't really want to write a script that parses the 'lsof' output to capture PIDs. Is there a Linux command that can kill all processes that have a particular filesystem open?
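fuser can do exactly this without any lsof parsing, and lsof itself has a -t flag that emits bare PIDs suitable for kill; /mnt/point is a placeholder.

Code:
fuser -km /mnt/point         # SIGKILL every process using the mount
kill $(lsof -t /mnt/point)   # alternative: lsof -t prints PIDs only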
I'm trying to avoid kill -9 for the reasons described in the Useless Use of Kill -9 form letter. Is this function sufficient, or do I need to kill the kill processes after a timeout or take care of other subtleties?
As an aside, what's a better name for this function? The current name reminds me of "Killing Me Softly", and manslaughter sounds a bit severe. Maybe spoon_kill (Google it)?
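For reference, the usual shape of such a function is SIGTERM first, then escalate to SIGKILL only if the process survives a timeout. A sketch (the name and the 10-second default are placeholders):

Code:
soft_kill() {
    local pid=$1 timeout=${2:-10}
    kill -TERM "$pid" 2>/dev/null || return 0   # already gone
    for ((i = 0; i < timeout; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0  # exited cleanly
        sleep 1
    done
    kill -KILL "$pid" 2>/dev/null               # escalate
}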
I wanted to know how to kill more than one process in Unix. E.g., my problem statement is: "Write a shell script to kill all processes started from your login."
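One common sketch for that exercise: signal every process owned by the current user, bearing in mind it will take down the shell running the script as well.

Code:
pkill -u "$(whoami)"          # or: kill $(pgrep -u "$(whoami)")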
Today I ran the OpenOffice.org extensions update, and it froze after showing me that everything was successful. When I xkilled it, it refused to launch without any indication of a problem. killall soffice.bin didn't report "No process found" after 1, 2, 3... 20 attempts. So I tried killall soffice.bin -i.
My server pretty often fills up with php processes that are not needed. Is there a way to search for and kill any php process that is more than 3 hours old? As I understand it, I need to use ps piped into awk. awk seems very complicated to me at the moment, and I don't know how to start tackling it.
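A sketch of that ps-into-awk pipeline, assuming the processes are literally named php and that ps supports the etimes column (elapsed seconds; 10800 s = 3 h):

Code:
ps -C php -o pid=,etimes= | awk '$2 > 10800 { print $1 }' | xargs -r kill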
I am doing real-time programming in C++ under Linux. I have two processes, say A and B. Process A is started periodically, every 5 ms; process B is started every 10 ms. Process A does data processing; process B reads that data and displays it. I am confused about how to run processes periodically. The problem is that the period of process A should be as accurate as possible (5 ms); for process B it isn't as important. I have created independent processes, each in its own .cpp file, and I start them from a bash file. Is that OK? Do I even need to create child processes in order to have parallel processes?
How can I measure the execution time of N processes and N threads and then compare the two to show that threads are faster than processes? I'm looking for understandable C code, or at least a good way to time N processes and N threads in C.
I have 2 completely different processes, A and B (they do not have any relationship). Suppose A opens a file with file descriptor 4 and B opens another file with file descriptor 5.
Can process A use fd 5 (i.e. the file object of process B) by any means? If I make fd 4 of process A point to the file object of process B (fd 5), will it work? Is it safe?