All of the kill-idle-user-processes scripts I've seen don't take into account that the user might have multiple sessions open. Such is the case with one of our clients. Currently, every hour or two I need to do the following:
This will get the TTY and idle time for all users.
For each idle time over half an hour, I do the following (TTY is the TTY from the previous command, with a space).
I then kill those processes.
There must be a way to do this automatically in a bash or perl script. I've tried both, but can't seem to get things to work properly.
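A minimal bash sketch of how this might be automated, assuming GNU coreutils `who -u` output where the idle column is in HH:MM form ("." means active, "old" means idle over a day); the 30-minute threshold and the choice to signal by TTY are assumptions, not the poster's actual script:
Code:
#!/bin/bash
# Sketch: terminate every login session idle for 30+ minutes.
# who -u columns: NAME LINE DATE TIME IDLE PID COMMENT
who -u | while read -r user tty date time idle pid rest; do
    case "$idle" in
        .|'?') continue ;;                                # active or unknown
        old)   mins=1440 ;;                               # idle for over a day
        *)     mins=$(( 10#${idle%%:*} * 60 + 10#${idle##*:} )) ;;
    esac
    if [ "$mins" -ge 30 ]; then
        echo "killing $user on $tty (idle ${mins}m)"
        pkill -TERM -t "$tty"                             # everything on that TTY
    fi
done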
that would show me at least any active ftp connections started with the ftp command, right? Is there then a way to use that to somehow kill any stuck sessions that are older than an hour?
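A hedged sketch of one way to act on that, assuming procps ps supports the etimes output field (elapsed time in seconds) and that the stuck clients really run under the command name ftp:
Code:
# Kill ftp client processes that have been running for more than an hour.
ps -C ftp -o pid=,etimes= | while read -r pid secs; do
    if [ "$secs" -gt 3600 ]; then
        echo "killing stuck ftp session $pid (${secs}s old)"
        kill "$pid"
    fi
done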
I am developing a daemon that is acting up, and I am now unable to create any new processes (i.e. I cannot start a new process to kill the other rogue processes). So, I need to be able to kill the processes from a remote machine. How do I do "kill" remotely without admin privileges? If I cannot kill my own process from a remote machine as a normal user, then tell me so I can mark it as the correct answer.
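For what it's worth, a hedged sketch of killing your own processes over ssh (the host name and PID below are placeholders); the catch is that sshd still has to be able to fork a session on the stuck machine, which may not be possible if the per-user process limit is already exhausted:
Code:
ssh user@stuck-host 'kill -TERM 12345'   # signal one of your own processes
ssh user@stuck-host 'kill -9 -1'         # or signal everything you own at once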
I've run into what is apparently an age-old SSH problem, which is that killing an ssh client process does not kill the remote process (unlike e.g. rsh). There seem to be lots of patches and a couple of open bugs on this topic that have been there for about 10 years or so... Having convinced myself by googling that there is no easy solution, I'm now looking for a workaround of some sort. I'm writing a testing framework so the processes I'm running remotely could be anything at all, i.e. I only have control of the client side. Also the remote processes are of course highly unstable and I need to be able to terminate them if they hang. ssh -t won't work for me as I don't necessarily have a terminal. Finding the remote process ID would be enough so I can do ssh <machine> kill <pid>, but I don't see any way to do that either. Just using ps, pgrep etc seems to suffer from not being able to uniquely identify the correct process, and killing the wrong process is of course very bad.
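Since there is only control of the client side, one commonly suggested workaround (sketched here with a placeholder host and command) is to have the remote shell print its own PID and then exec the real command, so that exact PID can be killed with a second ssh call later:
Code:
out=$(mktemp)
ssh somehost 'echo $$ && exec some-test-binary' > "$out" &
sleep 2                          # crude: wait for the PID line to arrive
rpid=$(head -n 1 "$out")         # after exec, the shell's PID *is* the command's PID

# ... later, if the test hangs:
ssh somehost "kill $rpid"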
I was referring to an article on the following website. [URL] I was surprised to learn that I can kill all running processes by using kill 0. However, when I tried running the command, nothing happened.
My machine details:
Code:
# lsb_release -a
LSB Version:    :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: EnterpriseEnterpriseServer
Description:    Enterprise Linux Enterprise Linux Server release 5.2 (Carthage)
Release:        5.2
Codename:       Carthage
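For reference, kill 0 sends the signal to every process in the caller's current process group, not to every process on the system, which is probably why nothing visible happened. A small demonstration (run from an interactive shell, where the subshell gets its own process group):
Code:
# The subshell and both background sleeps share one process group,
# so kill -TERM 0 inside it takes out all three.
( sleep 100 & sleep 200 & kill -TERM 0 )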
I'm attempting to use 'killall' to kill all mysql processes; however, after running the command the mysql processes are still alive. 'killall mysql' says no processes were killed, and while 'killall mysqld_safe' gives no message, there are still mysql processes alive afterwards.
Code:
# killall mysql
mysql: no process killed
# killall mysqld_safe
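A hedged guess at what is going on: the server process is usually named mysqld (and the wrapper mysqld_safe), so the name 'mysql' matches nothing. A sketch:
Code:
pgrep -l mysql        # see which names actually exist (likely mysqld, mysqld_safe)
pkill mysqld_safe     # stop the wrapper first so it cannot respawn the server
pkill mysqld          # then stop the server itself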
I don't know about your computer, but when mine is working properly no process is sucking 95%+ CPU over time. I would like to have some failsafe that kills any process behaving like that. This comes to mind because when I woke up this morning, my laptop had been crunching all night long on a stray chromium child process.
This can probably be done as a cron job, but before I make it a full-time job creating something like this, I thought I should check here. :) I hate reinventing the wheel.
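A rough sketch of what such a cron job could look like; the 95% threshold, the log path, and the choice of plain SIGTERM are all assumptions, and note that ps reports %CPU averaged over the whole life of the process rather than instantaneous load:
Code:
#!/bin/bash
# Kill any process reported at 95%+ CPU.
ps -eo pid=,pcpu=,comm= | while read -r pid cpu comm; do
    if [ "${cpu%.*}" -ge 95 ]; then
        echo "$(date): killing $comm (pid $pid) at ${cpu}% CPU" >> /tmp/cpu-hog.log
        kill "$pid"
    fi
done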
How do I kill processes accessing the Internet in the background using terminal commands? I'm looking for a command to stop (disconnect) processes accessing the Internet, and a command to kill a process accessing the Internet.
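A hedged sketch, assuming lsof is installed: -i selects processes with network sockets, and -t prints just their PIDs, which can be fed straight to kill:
Code:
lsof -i                # list every process with an open Internet connection
kill $(lsof -t -i)     # terminate them all (add -9 only if they refuse to exit)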
I thought 'killall' would work, but I need to provide the "command" to kill. I'm really looking for a command that will kill all processes that have a particular file/directory open. Currently, my script fails on an 'umount' because several processes have this filesystem open. The command 'lsof' is a good tool for determining which processes have a filesystem open, but I don't really want to write a script that parses the 'lsof' output to capture PIDs. Is there a Linux command that can kill all processes that have a particular filesystem open?
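fuser can both list and kill everything holding a mount point open, so no lsof parsing should be needed; a sketch with a placeholder mount point:
Code:
fuser -vm /mnt/data    # show what is using the filesystem (path is a placeholder)
fuser -km /mnt/data    # kill every process using it (sends SIGKILL by default)
umount /mnt/data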
I have an issue on one of my servers whereby the [normally very helpful] du and tar programs are somehow using up too much of my system's resources (du 40% mem, tar 20% mem) and causing problems. I am after a command which is able to kill a process without knowledge of a PID, but by process name, e.g. "du", and memory usage, e.g. >= 10%.
Something along the lines of: kill $(pgrep du) grep %MEM > 10
Although I know that is invalid syntax, I cannot fathom the correct/best way to achieve this end!
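A minimal sketch of that idea with valid syntax: ps reports %MEM per process, awk filters on the command name and the 10% threshold, and xargs hands any matching PIDs to kill (the threshold and the plain SIGTERM are assumptions):
Code:
ps -eo pid=,pmem=,comm= | awk '$3 == "du" && $2 >= 10 { print $1 }' | xargs -r kill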
I need a way to kill off PIDs 2819 and 2820 because they do not have a process tied to them, unlike PIDs 2918, 2922 and 6657. The way it works is that a peek shell (pid 2918) is opened, it then starts a ksh session (pid 2922), and from there the end user runs a command (pid 6657).
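One hedged way to spot those "empty" sessions: pgrep -P lists the children of a PID, so a ksh with no children left under it is a candidate to kill (the process name ksh is taken from the description above):
Code:
for pid in $(pgrep -x ksh); do
    if ! pgrep -P "$pid" > /dev/null; then       # no child process under this session
        echo "killing idle ksh session $pid"
        kill "$pid"
    fi
done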
I'm trying to avoid kill -9 for the reasons described in the Useless Use of Kill -9 form letter. Is this function sufficient, or do I need to kill the kill processes after a timeout or take care of other subtleties?
As an aside, what's a better name for this function? The current name reminds me of "Killing Me Softly", and manslaughter sounds a bit severe. Maybe spoon_kill (Google it)?
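The function being asked about isn't reproduced here; for context, a minimal sketch of the usual escalation pattern (the name and the 10-second default timeout are made up): send SIGTERM, poll for a while, and only fall back to SIGKILL if the process is still around:
Code:
kill_gently() {
    local pid=$1 timeout=${2:-10}
    kill -TERM "$pid" 2>/dev/null || return 0    # already gone or not ours
    for _ in $(seq "$timeout"); do
        kill -0 "$pid" 2>/dev/null || return 0   # exited cleanly
        sleep 1
    done
    kill -KILL "$pid" 2>/dev/null                # last resort
}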
I'm working with Eclipse and it's starting to misbehave now and then, which completely freezes my computer. Is there any emergency command to kill such a misbehaving process so I don't have to reboot my computer?
I already have an emergency xkill icon in my taskbar and a [Ctrl]+[F1] console with "> sudo killall eclipse" pretyped(!), but sometimes it's even too late for this. What I would need is an emergency command/console that gets a guaranteed amount of process time so I can kill these processes.
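One hedged idea for the "guaranteed process time" part: run the pre-typed rescue shell at real-time priority with chrt, so the scheduler still gives it CPU when Eclipse pegs the machine (the priority value 50 is an arbitrary choice):
Code:
sudo chrt -f 50 bash      # root shell under SCHED_FIFO, priority 50
# then, when everything else freezes:
killall eclipse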
which in theory should pull *only* the PID of "mono user1.exe" and kill only that. The problem: it kills every single instance of mono that is running on my system, every userx.exe that's open. I am confused, as a simple "ps aux | grep 'mono user1.exe'" only returns the mono user1.exe process and not the others. "ps aux | grep 'mono'" returns them all, though. How can I modify the script so that it only kills the specific process? Would "pkill -9 -f 'mono MCuser1.exe'" work as well, or would it too kill every instance of mono? I can't do much more trial and error; it's not good that I keep killing those instances accidentally...
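For reference, pkill/pgrep without -f match only the short process name (here just "mono"), which is why every instance dies; with -f they match against the whole command line. A sketch using the name from the post:
Code:
pgrep -af 'mono user1\.exe'    # dry run: show exactly what would match
pkill  -f 'mono user1\.exe'    # kill only that one instance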
I have three users on my machine, and I want to make sure that a process created by user1 can be killed by the other users and vice versa. Is there any way I can do that without using the root password or sudo?
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide", by Machtelt Garrels. There's one question in the 4th chapter (Processes) that I found confusing: Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the following command (considering that my username is sl33p), does that give me the right answer?
Code:
$ ps -u sl33p
The ps man page says:
-u userlist
Select by effective user ID (EUID) or name.
This selects the processes whose effective user name or ID is in userlist. The effective user ID describes the user whose file access permissions are used by the process (see geteuid(2)). Identical to U and --user.
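One hedged way to answer the book's question directly from /proc rather than from ps: walk the numeric entries you own and read each cmdline to see what the process really is ([ -O ] tests ownership by the effective UID):
Code:
for p in /proc/[0-9]*; do
    [ -O "$p" ] || continue              # keep only entries owned by my UID
    printf '%s: ' "${p#/proc/}"
    tr '\0' ' ' < "$p/cmdline"; echo     # cmdline is NUL-separated
done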
Code:
user@host$ killall -9 -u user
Will it definitely kill all processes owned by user (including forkbombs)?
Assume the following: no new processes are spawned for the user by other users; none of the user's processes are in D-sleep and unkillable; no processes are trying to detect and ptrace or terminate the started killall (but they can ptrace or do other things to each other); and there is a ulimit that prevents too many processes (but killall has already started and allocated its memory).
E.g., if killall finishes untampered and successfully, is it 100% certain that no processes are left with this UID? If not, how do I do it properly (with standard commands and no root access)? Will SysRq+I definitely kill all of them (even ones that are replicating)?
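A hedged note on the fork-bomb worry: killall walks the process list PID by PID, so something replicating fast enough could in principle slip through between individual kills, whereas kill(-1) is a single system call that signals every process the caller is allowed to signal (on Linux, excluding init and the calling process itself):
Code:
kill -9 -1    # as the target user: SIGKILL everything this user may signal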
I need to kill a process which has been started by user2 while I am user1, without being a sudoer or using root. Do you know if there is a way of setting that up when launching the process, such as a list of users allowed to kill it?
I just downloaded Wubi to install Ubuntu on my computer. (Whether it's called dual boot or running Ubuntu inside of Windows Vista, I do not know.) Everything went fine: I got a very easy little screen asking which partition I wanted to install it on and how much space to use, etcetera; the download went fine, blablabla.
Problem: It asks me to reboot (This is the first reboot I got.), I say yes, it reboots, I get the booting screen where I get to pick between Windows and Ubuntu, I pick Ubuntu, and it gives me this 'error'.
Among other things it reads:
After this nothing happens - I waited about 25 minutes.
I manually turned off my computer, started it up again, once again chose Ubuntu, and the same problem occurred.
1. I looked around the Ubuntu folder inside my C: drive a little; it shows the ISO file as Ubuntu-10.04.1-desktop-amd64.
2. Some other forum advised people with this problem to check their RAM, so I unplugged both sticks, cleaned them and switched them. (I have 2 GB of RAM total.) This didn't work; the same problem still occurs.
I would like to give a non-root user (nicollet) the ability to detect and send a signal to processes started by Apache2 (those processes are FastCGI scripts and the signal tells them to empty their cache). The processes are owned by the web user (www-data), and I'm running on Debian unstable.
I can't find any way to have the nicollet user see those processes.
The processes are running and can be seen by both root and www-data:
The most surprising thing is that the grep process is indeed run by www-data (because it's started from a setuid executable) and is visible, but the baryton process isn't.
What's going on here? Why can ps run by www-data show those processes, but ps run by a setuid executable running as www-data cannot, when it's started by nicollet?
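One hedged way to narrow this down: print the real and effective UIDs from inside the setuid context that runs ps. A setuid binary changes only the effective UID, and some kernel permission checks (signal delivery, for one) also look at the real UID, so knowing both may explain the difference; ps's ruser column shows the real owner of each process:
Code:
# run this from the same setuid context that runs ps
echo "real uid: $(id -ru)  effective uid: $(id -u)"
ps -u www-data -o pid,user,ruser,comm    # ruser = real owner of each process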
A few days ago, the server did not respond to an ssh request from a user at night. The user tried to check what went wrong with the computer and tried to log in from a terminal the next morning. As the computer was unresponsive, he somehow decided to reboot it by turning the power off. To make the story short, the server rebooted; however, he can't log in to his account. Actually, the server could not start some processes, but it was able to ask the user for his account username. Even though he enters the correct username and password, the server does not accept the login. I also could not log in as root.
I just checked the server logs by booting it in single user mode. Here are some interesting lines:
Before the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
After the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
fsck: fsck /: (this is repeated 900+ times)
Normally all I/O goes through the kernel so that it can schedule the operations and prevent processes from stepping on each other. A few special user processes are allowed to slide around the kernel, usually by being given direct access to I/O ports. X servers are the most common example of this, aren't they? Can you give examples of any other processes that are allowed to slide around the kernel?
I'm trying to get the end result to have the same format as this as well:
1 bin
2 daemon
67 erozner
[code]....
Where the numbers are the number of processes being run by the user (the name right next to it). If I input the command egrep myFile into the terminal, it should look for every line with the letter x in myFile, right?
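A hedged sketch of one way to get that count-per-user format: print just the owner of every process and let sort and uniq -c do the counting:
Code:
ps -eo user= | sort | uniq -c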