Server :: With PS Cannot Control The Processes Of Another User
Feb 15, 2011
I have a problem with the permissions of the directories under /proc: they are readable and accessible only by their owner (they have permission 500 instead of the usual 555). As a consequence, the processes are visible only to their owners or to root. For example, if I want to check whether mysql is running,
I see it only as the mysql user or as root, because the directory has permission 500.
This problem obstructs applications that need to check for processes owned by other users. At the beginning everything worked well, but after a while the problem appeared and I don't know what caused it. How do I restore the standard handling of permissions under /proc?
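A diagnostic sketch of checks that may narrow this down; the mysqld process name is an assumption, and the hidepid option only exists on newer kernels (3.3+), so it may not apply here:
Code:
# run as root: check how /proc is mounted; restrictive options such as
# hidepid=, or a grsecurity-patched kernel, can produce exactly this symptom
mount | grep ' /proc '
# confirm the owner and mode of one affected per-process directory
ls -ld /proc/$(pidof mysqld)
# if a hidepid-style option is responsible, remounting with it cleared
# should restore the usual 555 directories
mount -o remount,hidepid=0 /proc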
I have some domains on a VPS server. Typical memory usage across all domains runs at 50% of what is available, but I have a problem. One domain is causing me trouble: intermittently, traffic spikes on that domain cause so many requests within one minute that I exceed the memory allocation for my entire VPS package. Apache is then killed by the virtualization software, and Apache must then be restarted.
A sample snippet from top right before the server went down would look like this:
All of that memory usage adds up. I would like to "throttle" the number of processes that user/domain can run. I think this would be a quick and easy way to keep the domain from taking down my entire VPS. My understanding is that I could do this with the /etc/security/limits.conf file.
Is that correct?
I have never done this before. Do I want to set a hard or soft limit? I think if I wanted to limit the number of processes for "coldclim" to 15 I would add a line to limits.conf like this:
Code:
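# the /etc/security/limits.conf entry described above (domain, type, item, value);
# 15 as a hard cap on the number of processes for the account "coldclim"
coldclim hard nproc 15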
Assuming that is correct, can anyone tell me how the website would respond once it reached its limit? Would visitor queries become sluggish, or would the website not come up for them at all?
I am using SuSE 10.2 for an internet and e-mail server. Currently all my users have internet access if they know how to set up their web browsers. How do I deny some users internet access, so that a user can only access his/her e-mail but not the internet?
I'm using squid as a proxy server on FC6. I'm also using squidGuard for web-site access restriction. I now want to make an exception for website access. My network IP block is 192.168.7.0/24 and facebook.com is restricted for everyone with squidGuard, but I want to allow facebook.com only for 192.168.7.51/32.
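A sketch of how the exception might look in squidGuard.conf; the src/dest names are made up and the blocklist category must match the existing configuration:
Code:
# hypothetical squidGuard.conf fragment
src fb_allowed {
    ip 192.168.7.51
}
dest social {
    domainlist social/domains    # assumed to contain facebook.com
}
acl {
    fb_allowed {
        pass all                 # this one host bypasses the block
    }
    default {
        pass !social all         # everyone else keeps the restriction
    }
}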
I have a mail server (Postfix) running on Slackware Linux 12.1. I need to set up a control panel so that one can create/delete/modify email accounts as well as manage email aliases.
I have configured a squid proxy on CentOS 5.5, and part of my squid.conf file has the following lines:
Code:
http_access allow ncsa_users office
There are three users, "user034", "user035" and "user050", in the /etc/squid/squid_passwd file who need to be restricted from internet access except for the site www.abc.com, from anywhere in the LAN. The rule should apply wherever they log in (that means no IP-related ACL, only username-related ones). How can I configure this in squid?
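A sketch of one way to express this in squid.conf with username-based ACLs; it builds on the existing ncsa_users authentication, and the new ACL names are made up:
Code:
# match the three restricted accounts by authenticated username
acl restricted_users proxy_auth user034 user035 user050
# the one site they may still reach
acl abc_site dstdomain www.abc.com
# order matters: these lines must come before the general allow rule
http_access allow restricted_users abc_site
http_access deny restricted_users
http_access allow ncsa_users office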
Title sums up my problem. I'm running so many processes in Slackware, running KDE. I don't even run that many programs, and already it's more than XP has (by a huge margin). What is wrong here, and how do I kill a lot of the processes to cut my CPU and memory usage way down while still keeping everything the same?
Here's a picture of my system monitor: img651.imageshack.us/img651/5994/systemmonitorz.png
I didn't put image tags because it's fullscreen.
The memory use rises over time: when I restarted my computer it was up to 500-600 MB. A minute after the restart it's at 360 MB.
I have a user account which is required to run as part of the operating system and as a service. I am currently attempting to install my company's software on an Ubuntu desktop via Wine, just to find out whether it's doable.
Is there a way in Ubuntu for a user account to be given the local rights assignment to act as part of the operating system and to function as a service in the background?
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide", by Machtelt Garrels. There's one question in the 4th chapter (Processes) that I found confusing: Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the following command (considering that my username is sl33p): Code: $ ps -u sl33p ...does that give me the right answer?
The ps man page says: -u userlist Select by effective user ID (EUID) or name.
This selects the processes whose effective user name or ID is in userlist. The effective user ID describes the user whose file access permissions are used by the process (see geteuid(2)). Identical to U and --user.
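A sketch of a more literal answer to the book's question, walking /proc directly instead of going through ps; this is just one way to do it:
Code:
#!/bin/bash
# for every /proc/PID directory owned by the current (effective) user,
# print the PID and the command line that the entry actually represents
for d in /proc/[0-9]*; do
    [ -O "$d" ] || continue              # -O: owned by our EUID
    printf '%s: ' "${d#/proc/}"
    tr '\0' ' ' < "$d/cmdline"; echo     # cmdline is NUL-separated
done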
user@host$ killall -9 -u user
Will it definitely kill all processes owned by user (including fork bombs)? Assume the following:
No new processes are spawned for the user by other users. No processes of the user are in D-sleep and unkillable. No processes are trying to detect and ptrace or terminate the started killall (but they can ptrace or do other things to each other). There is a ulimit that prevents too many processes (but killall is already started and has allocated its memory).
E.g., if killall finishes untampered and successfully, is it 100% certain that no processes are left with this UID? If not, how do I do it properly (with standard commands and no root access)? Will SysRq+I definitely kill everything (even replicating processes)?
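One commonly suggested approach, sketched below: stop every process first, since a stopped process cannot fork between the scan and the kill, then deliver SIGKILL. The loop is defensive rather than provably necessary:
Code:
# freeze all of the user's processes, then kill them; repeat until none remain
while pgrep -u user > /dev/null; do
    pkill -STOP -u user
    pkill -KILL -u user
done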
All the kill-idle-user-processes scripts I've seen don't take into account that the user might have multiple sessions open. Such is the case with one of our clients. Currently, every hour or two I need to do the following:
This will get the TTY and idle time for all users.
For each idle time over half an hour, I do the following (the TTY is the TTY from the previous command, followed by a space):
I then kill those processes.
There must be a way to do this automatically in a bash or perl script. I've tried both, but can't seem to get things to work properly.
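A bash sketch of the automation described; it judges idleness from each terminal device's access time rather than parsing w output, and the 30-minute threshold and choice of SIGKILL are assumptions:
Code:
#!/bin/bash
# kill every process on any pseudo-terminal idle for more than 30 minutes;
# handles users with multiple sessions because it works per TTY, not per user
now=$(date +%s)
for tty in /dev/pts/*; do
    idle=$(( now - $(stat -c %X "$tty") ))   # %X = last access time (epoch)
    if [ "$idle" -gt 1800 ]; then
        pkill -KILL -t "${tty#/dev/}"        # pkill -t expects e.g. pts/3
    fi
done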
I would like to give a non-root user (nicollet) the ability to detect and send a signal to processes started by Apache2 (those processes are FastCGI scripts and the signal tells them to empty their cache). The processes are owned by the web user (www-data), and I'm running on Debian unstable.
I can't find any way to have the nicollet user see those processes.
The processes are running and can be seen by both root and www-data:
The most surprising thing is that the grep process is indeed run by www-data (because it's started from a setuid executable) and is visible, but the baryton process isn't.
What's going on here? Why can ps run by www-data show those processes, while ps run from a setuid executable running as www-data, started by nicollet, cannot?
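Setting aside why the setuid route fails, a sketch of a sudo-based workaround that lets nicollet send the signal as www-data directly; the signal number and pkill path are assumptions, and baryton is the process name from the post:
Code:
# /etc/sudoers entry (edit with visudo):
nicollet ALL = (www-data) NOPASSWD: /usr/bin/pkill -USR1 -u www-data baryton
# then, as nicollet:
sudo -u www-data pkill -USR1 -u www-data baryton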
Is there any possible way to hide currently running processes from a user? That is, I do not want him to know what programs/processes any user other than himself is running. In short, if that user runs 'ps aux' he should get only his own processes.
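On kernels that support it (3.3 and later), this is exactly what the hidepid mount option for /proc provides; a sketch:
Code:
# restrict /proc so each user sees only his own processes
mount -o remount,hidepid=2 /proc
# the matching /etc/fstab line to make it permanent:
proc  /proc  proc  defaults,hidepid=2  0  0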
A few days ago, the server did not respond to an ssh request from a user at night. The user tried to check what had gone wrong and tried to log in from a terminal the next morning. As the computer was unresponsive, he decided to reboot it by turning the power off. To make the story short, the server rebooted; however, he can't log in to his account. The server could not start some processes, but it was able to ask the user for an account username. Even though he enters the correct username and password, the server does not accept the request. I also could not log in as root.
I just checked the server logs by booting it in single user mode. Here are some interesting lines:
Before the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
After the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
fsck: fsck /: (this is repeated 900+ times)
Normally all I/O goes through the kernel so that it can schedule the operations and prevent processes from stepping on each other. A few special user processes are allowed to slide around the kernel, usually by being given direct access to I/O ports. X servers are the most common example of this, aren't they? Can you give examples of any other processes that are allowed to slide around the kernel?
This is scary: a bunch of vmware-user-wra processes are stalling the CPU at 100%! What's going on? The server has just been restarted! Before I restarted, root had started all these vmware-user-wra processes; I was configuring vncserver! After the restart, they are started by the user roo300, which I had used to log in via Secure Shell!
I'm trying to get the end result to have the same format as this as well:
1 bin
2 daemon
67 erozner
[code]....
Where the numbers are the number of processes being run by the user (the name right next to it). If I input the command egrep x myFile into the terminal, it should look for every line with the letter x in myFile, right?
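For the per-user process counts, a one-liner sketch that produces exactly that count-then-name format:
Code:
# print the owning user of every process, then count occurrences per user
ps -eo user= | sort | uniq -c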
I have what I think is a somewhat different failure of standby than I've seen listed in other threads, and I'm stumped. The system hangs on this for a while, then comes back to the login screen without going into standby. This ONLY HAPPENS on a SECOND standby attempt; the first standby after booting ALWAYS succeeds. The standby log doesn't indicate any failures. I had made other changes previously that temporarily got standby working consistently, in /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.autosuspend=-1"
The problem I have is that when I enter my username, the output (my real name) does not appear in output.txt; instead it displays in PuTTY. So when I run my script in PuTTY, it shows the message to enter a username, and after I enter my username my real name appears below it. I want it to show up in output.txt.
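A sketch of what the script presumably needs: keep the prompt on the terminal but redirect only the looked-up name to the file. The getent/GECOS lookup is an assumption about how the real name is obtained:
Code:
#!/bin/bash
read -p "Enter username: " username
# look up the real name (the GECOS field of /etc/passwd) and append it
# to output.txt instead of printing it to the terminal
getent passwd "$username" | cut -d: -f5 >> output.txt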
As root, I added a new user via useradd and set a password. The problem now is that when I log in remotely via PuTTY using the new user's credentials, I am able to browse the root folders and so on. How do I limit the user to a few folders only, such as the home folder and maybe one or two others?
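One common approach is OpenSSH's ChrootDirectory; a sketch for sshd_config, where the group name is made up and the chroot target must be owned by root and not writable by the user:
Code:
# /etc/ssh/sshd_config fragment (hypothetical group "limitedusers")
Match Group limitedusers
    ChrootDirectory /home/%u
    ForceCommand internal-sftp    # SFTP only; omit to allow a (chrooted) shell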
I have Fedora 13 64-bit installed on the box I want to connect to with VNC, and I have tigervnc & tigervnc-server installed. As my user I run vncserver, and then I can connect to that box using 192.168.1.2:1, but it is a different desktop. How can I use VNC to control my user's existing desktop?
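vncserver always starts a fresh X session; to share the desktop that is already on screen, a separate program such as x11vnc is the usual route. A sketch, run from within the user's existing session:
Code:
# attach a VNC server to the already-running display :0
x11vnc -display :0 -auth ~/.Xauthority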
I haven't worked on a Linux system in about 6 years, so I'm a little rusty, and I wasn't that great 6 years ago. I'm trying to create a user that can only upload to the server. I have picked through several posts, tutorials and such, but it's still not working. Currently you can still upload and download, even though you should only be able to upload. I'm sure I'm missing something, but I have no clue what.
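If the server in question happens to be vsftpd (an assumption, the post doesn't say what is serving the uploads), upload-only behaviour can be expressed directly in vsftpd.conf; a sketch:
Code:
# vsftpd.conf fragment: permit uploads but refuse downloads
write_enable=YES
download_enable=NO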
In the past I used Slackware, but I changed to Gentoo because Slackware doesn't have a nice package manager. I like Gentoo very much, because you have control over your installation from the very beginning. You can tweak your installation before it's a running installation.
The main problem is that you need to compile everything from source (this has some benefits), but I have a 3.0 GHz Pentium 4, so compiling a full system takes days. That's not what I want.
Because of that I tried Debian; it's nice because of the large collection of precompiled packages. However, it seems less stable than Gentoo (I run Debian sid, because I need the newer applications). It's also harder to configure to my needs.
Is there a (Gentoo-based) distribution that gives the user a lot of control over the installation process (like Gentoo) and afterwards Gentoo-like configuration? (It was so much better than Debian. For example: where is the xorgconfig utility in Debian? I know there is something called dpkg-reconfigure to configure X, but that didn't ask me about video card and monitor settings, only keyboard-related options.)
But the distribution must have binaries for all (or at least a lot of) programs.
Is there such a distribution? What comes closest to my needs?
I have a shared folder with access restricted to certain users on a file server at work. Currently, when I try to add a user, I follow this process:
Right-click the folder and go to Properties
Click the "Access Control List" tab
Select a user from the "Participants List"
Click Add
For most users this process works fine, but with one of them I get the following error: "Could not add ACL entry: Invalid Argument". I also tried a script that a former employee created, which uses this command: setfacl -m u:<USER>:rw- <PATH>. Running the command with the correct user and path returns a similar "Invalid Argument" error. We're using OpenSuse 10.2.
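A few diagnostic checks worth running, sketched below; which one applies depends on why setfacl rejects the entry, so treat this as a checklist rather than a fix:
Code:
# does the user name resolve on this machine (typo, or missing from NIS/LDAP)?
getent passwd <USER>
# is the existing ACL on the path readable and sane?
getfacl <PATH>
# is the filesystem mounted with ACL support at all?
mount | grep acl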
For the past few days I have been putting effort into understanding the software control flow from the boot loader to Linux user space.
I am consolidating the entire process and putting it forth in this forum. It would be great if someone could validate it; it might be useful to other newbies too.
Step 1 : Power up the board
Step 2: CPU control goes to the EEPROM/storage memory where the BIOS resides
Step 3: The BIOS gets loaded into RAM and executed
Step 4: During execution, the boot device is selected with the help of the BIOS menu [the blue screen that appears during start-up on normal PCs]
Step 5: The BIOS accesses the boot loader stored on the boot device [e.g., a hard disk]. The boot loader is stored in the MBR area.
For explanation purposes I assume the following configuration:
Bootloader = GRUB
Boot Device = Hard Disk
Step 6: GRUB is loaded into RAM and executed
Step 7: GRUB loads the KERNEL image into RAM. The kernel image is stored on the hard disk.
This raises the question of how GRUB knows where the kernel image is stored.
The answer: in the grub.conf file, the locations of the "Kernel Image" and the "Ramdisk Image" [which will be discussed later in the section] are given.
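An illustrative GRUB legacy configuration entry of the kind meant here; the paths and version numbers are made up:
Code:
# /boot/grub/grub.conf (a.k.a. menu.lst) fragment
title Linux
    root (hd0,0)                                   # partition holding /boot
    kernel /boot/vmlinuz-2.6.32 root=/dev/sda1 ro  # kernel image and cmdline
    initrd /boot/initrd-2.6.32.img                 # ramdisk image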
Step 8: The kernel image, followed by the ramdisk image, is loaded into RAM by the GRUB boot loader
Step 9: The kernel image gets executed. During execution, the top portion of the code performs the initial hardware initialization, and the latter part decompresses the kernel image proper
Step 10: After decompressing the kernel image, it decompresses the already-loaded ramdisk image
The ramdisk is essentially a temporary hard disk created in RAM. Its main responsibility is to provide the minimal driver files, executables and directory structure needed to create a TEMPORARY ROOT FILE SYSTEM.
This temporary root file system is used by the kernel to:
1. execute the programs needed to access the hard disk, and 2. mount the permanent root file system on the HARD DISK.
Step 11: The kernel looks for the file /linuxrc in the ramdisk. linuxrc is a user-space script file [not sure]
Step 12: At the end of the linuxrc script, the ramdisk hands control over to "USER SPACE" [path for writing the script not known], i.e. the real init on the permanent root file system
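A sketch of what a classic linuxrc might contain, following the pattern in the pivot_root(8) man page; the device name and driver module are made up, and /new_root/initrd is assumed to exist to receive the old root:
Code:
#!/bin/sh
# minimal linuxrc: load what is needed to reach the disk, mount the
# permanent root file system, then hand control to the real init
mount -t proc proc /proc
insmod /lib/modules/sata_driver.ko   # hypothetical disk driver module
mount -o ro /dev/sda1 /new_root      # mount the permanent root fs
cd /new_root
pivot_root . initrd                  # swap the temporary and real roots
exec chroot . /sbin/init < dev/console > dev/console 2>&1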