I have some domains on a VPS server. Typical memory usage across all domains runs at 50% of available, but I have a problem. One domain is causing me trouble: intermittently, traffic will spike on that domain, generating so many requests within one minute that I exceed the memory allocation for my entire VPS package. Apache is then killed by the virtualization software and must be restarted.
A sample snippet from top right before the server went down would look like this:
All of that memory usage adds up. I would like to "throttle" the number of processes that user/domain can run. I think this would be a quick and easy way to keep the domain from taking down my entire VPS. My understanding is that I could do this with the /etc/security/limits.conf file.
Is that correct?
I have never done this before. Do I want to set a hard or soft limit? I think if I wanted to limit the number of processes for "coldclim" to 15 I would add a line to limits.conf like this:
Code:
coldclim hard nproc 15
Assuming that is correct, can anyone tell me how the website would respond once it reached its limit? Would visitor queries become sluggish, or would the website not come up for them at all?
I have a VPS server with 512 MB memory. php.ini is set so the script memory limit (memory_limit) is 16 MB. However, I have noticed in my top report instances like the following:
The bold number of 6.4 is the % of server memory this process is using. 6.4% of 512 MB is about 32 MB of memory, so it appears that this isn't being limited by php.ini. Am I correct? This leads to the next question: is there some way to limit the amount of memory a single suPHP process can use? (Basically, something like the php.ini setting, but one that limits suPHP processes in the same way.)
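If suPHP runs the interpreter as a separate process forked by an Apache child (which is my understanding of how mod_suphp works), then maybe Apache's RLimitMEM directive could cap it; a sketch for httpd.conf, with values I made up:
Code:
# httpd.conf -- limit memory of processes forked by Apache children
# (soft limit, then hard limit, both in bytes; here 32 MB / 48 MB)
RLimitMEM 33554432 50331648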
I have a problem with the permissions of the directories under /proc: they are readable and accessible only by their owner (they have permission 500 instead of the usual 555). As a consequence, processes are visible only to their owners or to root. For example, if I want to check whether mysql is running, I see it only as the mysql user or as root, because the directory has permission 500.
This obstructs the functioning of some applications that need to check for the existence of processes managed by other users. In the beginning everything worked well, but after a while the problem appeared, and I don't know what the reason for it is. How can I restore the standard management of permissions under /proc?
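One cause I would check for (an assumption on my part, since I don't know what changed on the box) is /proc having been mounted with the hidepid option, which makes process directories accessible only to their owners:
Code:
# see whether /proc is mounted with a hidepid option
grep ' /proc ' /proc/mounts
# restore the default visibility
mount -o remount,hidepid=0 /proc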
I ran into a user today who indicated that their company only allows them to log in through a terminal session once (no multiple logins). On a second try, their login window terminates. They are using PuTTY. Is this being accomplished through PAM or sshd (or some other method)?
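If it is PAM, my guess is the maxlogins item in /etc/security/limits.conf, enforced by pam_limits ('someuser' is a placeholder):
Code:
# /etc/security/limits.conf -- allow at most one concurrent login
someuser    hard    maxlogins    1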
Recently I rented a Xen VPS, intending to set up a PPTPD VPN server for me and my friends, so we can bypass the Great Firewall in China and get back on ....., Facebook, and such. I have already set up the server and I can connect to it without any problem, but I still want to do some further configuration of the server:
1. I want to limit the bandwidth to 400k/s per connection.
2. I also want to limit the maximum connections per user account.
I have some thoughts on the second requirement. In the user configuration file, /etc/ppp/chap-secrets, you can specify the range of IP addresses a user can get. Does that limit the maximum connections per user account, or can they connect anyway, with a box popping up every now and then saying there is an IP address conflict?
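As for the first requirement, my untested plan is to attach a token-bucket filter to each PPP interface as it comes up, assuming a distro that runs /etc/ppp/ip-up.local with the interface name as the first argument (400 kB/s is roughly 3200 kbit/s):
Code:
#!/bin/sh
# /etc/ppp/ip-up.local -- $1 is the ppp interface that just came up
tc qdisc add dev "$1" root tbf rate 3200kbit burst 32kbit latency 400ms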
Last weekend I increased the open file limit (ulimit -n) for the application user ID. I updated the limits.conf file with the necessary entries and restarted the service, and the server as well. When I check the ulimit value for that specific user by switching to it from another user, it shows the new value (10240), but if I log in directly using the application ID, the ulimit value shows 1024. Which one is the default?
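One thing I wonder about: does the direct-login path actually run pam_limits? From what I've read, limits.conf only takes effect if the PAM service used for that login path includes the module, e.g. in /etc/pam.d/sshd or /etc/pam.d/login:
Code:
session    required    pam_limits.so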
At present we are using a Windows print server, with user names authenticated against the domain server. I need your suggestions on configuring a Linux print server: how to share the printer with users, and how to limit how much users can print.
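For the printout-limiting part specifically, I understand CUPS has per-printer, per-user quotas built in (the queue name here is a placeholder):
Code:
# allow each user at most 100 pages per week (604800 seconds) on this queue
lpadmin -p officeprinter -o job-quota-period=604800 -o job-page-limit=100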
I have a few multi-user servers in an academic laboratory. I am having a problem with some users maxing out the available RAM, causing such severe slowdowns that the machine essentially crashes. My servers are Dell PowerEdges running Ubuntu 8.10 Server Edition (not my choice). I would like to set a maximum limit on the amount of RAM a user can utilize. This morning I experimented with setting limits via /etc/security/limits.conf and using ulimit. Neither of them prevented my test program, a simple infinite loop of mallocs, from crashing the server.
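From what I've read since, the 'rss' item in limits.conf is silently ignored on 2.6 kernels, while the 'as' item (virtual address space, in KB) is the one that is actually enforced, so maybe that's my problem. Something like this is what I plan to test next (user name made up):
Code:
# /etc/security/limits.conf -- cap each of studentuser's processes at ~2 GB
# of address space (value in KB); note this is per process, not per user total
studentuser    hard    as    2097152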
I'm trying to limit the disk space users on the system may consume, and I found quotas (I'm a total Linux noob). But when I try to set a quota, no matter what I set it to, the maximum is 2 GB. I need quite a lot more than that: one user should be able to use 1900 GB and the other 600 GB. How can I fix this? I'm using Ubuntu Server 10.04.
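In case it matters, here is what I'm trying to express, as setquota commands (user names made up; block limits are in 1 KB units, so 1900 GB ≈ 1992294400 blocks). If the tools cap out at 2 GB, my guess is the old quota file format; I believe the journaled vfsv0/vfsv1 formats support larger limits:
Code:
# hard block limits of 1900 GB and 600 GB; soft and inode limits unlimited (0)
setquota -u alice 0 1992294400 0 0 /home
setquota -u bob   0  629145600 0 0 /home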
I've been looking for this feature for months and couldn't find a solution for this. Does anyone know how to create users and limit the user to a specified directory?
Is there any way to limit user names to eight characters, so that when I create an account, if the account name is more than 8 characters long, it will not be allowed?
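If there is no built-in knob for this, one fallback I can imagine is a small wrapper script used instead of useradd (entirely hypothetical):
Code:
#!/bin/sh
# usage: adduser8 NAME [useradd options...]
# refuse account names longer than 8 characters, then hand off to useradd
name="$1"
if [ "${#name}" -gt 8 ]; then
    echo "account name '$name' exceeds 8 characters" >&2
    exit 1
fi
shift
exec useradd "$@" "$name"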
I am studying for the LPIC-1 exam, and reading a book that they recommend: "Introduction to Linux: A Hands-on Guide" by Machtelt Garrels. There's one question in the 4th chapter (Processes) that I found confusing: Question: Based on process entries in /proc, owned by your UID, how would you work to find out which processes these actually represent?
What does he mean? If I run the following command (considering that my username is sl33p), does it give me the right answer?
Code:
$ ps -u sl33p
The ps man page says: -u userlist Select by effective user ID (EUID) or name.
This selects the processes whose effective user name or ID is in userlist. The effective user ID describes the user whose file access permissions are used by the process (see geteuid(2)). Identical to U and --user.
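My attempt at a /proc-based answer, rather than going through ps (bash; the -O test checks that a file is owned by my effective UID):
Code:
#!/bin/bash
# list the /proc entries I own and show what each one actually represents
for d in /proc/[0-9]*; do
    if [ -O "$d" ]; then                # owned by my effective UID?
        printf '%s: ' "${d#/proc/}"
        tr '\0' ' ' < "$d/cmdline"      # cmdline is NUL-separated
        echo
    fi
done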
Code:
user@host$ killall -9 -u user
Will it definitely kill all processes owned by user (including fork bombs)? Assume that: no new processes are spawned for the user by other users; none of the user's processes are in D-sleep and unkillable; no processes try to detect and ptrace or terminate the started killall (though they may ptrace or do other things to each other); and a ulimit prevents too many processes (but killall is already started and has allocated its memory).
E.g., if killall finishes untampered-with and successfully, is it 100% certain that no processes are left with this UID? If not, how do I do it properly (with standard commands and no root access)? Will SysRq+I definitely kill everything (even self-replicating processes)?
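(For comparison, the approach I've seen recommended against fork bombs for an unprivileged user is to freeze everything first, so nothing can fork while being killed; -1 means "every process I am allowed to signal":)
Code:
# run as the affected user
kill -s STOP -- -1
kill -s KILL -- -1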
All the kill idle user processes scripts I've seen don't take into account that the user might have multiple sessions open. Such is the case with one of our clients. Currently, every hour or two I need to do the following:
This will get the TTY and idle time for all users.
For each idle time over half an hour, I do the following (TTY is the TTY from the previous command, with a space):
I then kill those processes.
There must be a way to do this automatically in a bash or perl script. I've tried both, but can't seem to get things to work properly.
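Here is my current bash sketch, assuming who -u's idle column is '.', HH:MM, or 'old', and using pkill -t to kill by TTY; I'd appreciate someone sanity-checking it:
Code:
#!/bin/bash
# kill login sessions idle for 30 minutes or more (sketch, not production)
who -u | while read -r user tty _date _time idle pid _rest; do
    case "$idle" in
        old)                                  # idle more than 24 hours
            pkill -9 -t "$tty" ;;
        [0-9][0-9]:[0-9][0-9])
            hh=${idle%:*}; mm=${idle#*:}
            if [ "$hh" -gt 0 ] || [ "$mm" -ge 30 ]; then
                pkill -9 -t "$tty"            # tty looks like pts/0
            fi ;;
    esac
done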
I have a Linux server that users connect to with SSH. My users only upload and download content from their /home folders.
Basically, I want them to be limited to seeing and using only their home folder.
I read that it might not be a good idea to do so, since they need read permissions to run programs and scripts, but again: they are only downloading/uploading content to their home dir.
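If SFTP-only access is acceptable, I've read that OpenSSH (4.9 and later, I believe) can chroot each user into their home directory without granting read access to anything else; something like this in sshd_config (the 'sftponly' group is my invention):
Code:
# /etc/ssh/sshd_config
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
# note: the chroot target must be owned by root and not writable by the group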
I'm new to Linux and would like some help, or to be taught. My question is: how do I limit users to their own directory? For example, user andrew (/home/andrew) can't access /root or /usr.
Is it possible to limit each user so that only one can connect via each username for ssh/sftp? I work with a small company where there aren't really enough of us to justify using a revision control system, but we don't want to accidentally step on each other's toes, so we'd like to try simply preventing more than one person from accessing a given domain at once.
I would like to give a non-root user (nicollet) the ability to detect and send a signal to processes started by Apache2 (those processes are FastCGI scripts and the signal tells them to empty their cache). The processes are owned by the web user (www-data), and I'm running on Debian unstable.
I can't find any way to have the nicollet user see those processes.
The processes are running and can be seen by both root and www-data:
The most surprising part is that the grep process is indeed run by www-data (because it's started from a setuid executable) and is visible, but the baryton process isn't.
What's going on here? Why can ps run by www-data show those processes, but ps run by a setuid executable running as www-data cannot, when it's started by nicollet?
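One guess I'd like to check (purely an assumption from the symptoms): if /proc is mounted with hidepid, visibility is governed by the mount options rather than ordinary permissions, and a gid= option whitelists one group, e.g.:
Code:
# inspect the current /proc mount options
grep ' /proc ' /proc/mounts
# allow members of gid 142 (a made-up group) to see all of /proc
mount -o remount,hidepid=2,gid=142 /proc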
Is there any possible way to hide currently running processes from a user? That is, I do not want him to know what programs/processes any user but him is running. In short, if that user runs 'ps -aux', he should get only his own processes.
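I came across the kernel's hidepid mount option for /proc, which sounds like exactly this (assuming kernel 3.3 or newer):
Code:
# hide other users' processes from ordinary users
mount -o remount,hidepid=1 /proc
# to make it permanent, in /etc/fstab:
# proc  /proc  proc  defaults,hidepid=1  0 0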
A few days ago, the server did not respond to an ssh request from a user at night. The user tried to check what went wrong with the computer and tried to log in from the terminal the next morning. As the computer was unresponsive, he decided to reboot it by turning the power off. To make the story short, the server rebooted; however, he can't log in to his account. Actually, the server could not start some processes, but it was able to ask the user to enter his account username. Even though he enters the correct username and password, the server does not accept the request. I also could not log in as root.
I just checked the server logs by booting it in single user mode. Here are some interesting lines:
Before the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
After the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
fsck: fsck /: (this is repeated 900+ times)
Normally all I/O goes through the kernel so that it can schedule the operations and prevent processes from stepping on each other. A few special user processes are allowed to slide around the kernel, usually by being given direct access to I/O ports. X servers are the most common example of this, aren't they? Can you give examples of any other processes that are allowed to slide around the kernel?
Limit every user to his own home folder only. I have a web server running 10.04 LTS, and as a newbie in the world of server administration, I'm in a bind. Right now, I have three users: root, which obviously has access to everything, and two other users that each own a website. For these two users, their website is located in their respective home folder, in an extra folder they each have read, write & execute permissions on. This is the only folder they can write to. They cannot delete it, or change anything outside the folder.
So far so good, except that by default they can also read any file on the system, meaning they can navigate to my other websites' folders and read, for instance, the database passwords from WordPress config files. This is obviously problematic. The users access their files and folders through SSH with FileZilla.
How can I prevent these users from reading sensitive data, i.e. how can I restrict their access to only their home folder?
1. The users must continue to log in through SSH with FileZilla (i.e. no FTP solutions).
2. Apache must still be able to access the users' folders (i.e. cannot chmod to 750).
3. The folder containing the command-line tools (/bin/bash, I think) will probably have to be symlinked in the user's home folder?
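My current idea, assuming the FileZilla sessions can use SFTP (which still runs over SSH, so it's not an FTP solution): OpenSSH's ChrootDirectory with internal-sftp should avoid the /bin/bash symlink entirely. A sketch with made-up user names:
Code:
# /etc/ssh/sshd_config
Match User site1,site2
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
# each /home/<user> must be root-owned and not group-writable;
# the user's writable website folder sits one level below it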
I am a newbie to Linux. I want a script for MAC-based per-user bandwidth limits. Also, how can I add a user's MAC address, and where do I add it? I want 16 Kbps for each user.
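My untested sketch of one way to do this on the gateway: mark packets by source MAC with iptables, then shape the marked flow with tc (the MAC address, interface, and mark number are placeholders):
Code:
#!/bin/sh
# limit upload traffic from one user's MAC to 16 kbit/s (one direction only)
iptables -t mangle -A PREROUTING -m mac --mac-source 00:11:22:33:44:55 \
         -j MARK --set-mark 10
tc qdisc  add dev eth0 root handle 1: htb
tc class  add dev eth0 parent 1: classid 1:10 htb rate 16kbit
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10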
I am a user of a cluster. I don't want root to see/copy files from my user account (obviously). Is it possible to limit root's access to users' accounts?
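As far as I understand, root can always read everything on the machine; the only mitigation I can think of is encrypting sensitive files myself, e.g. with gpg, so root can copy them but not read them:
Code:
# symmetric encryption with a passphrase; produces secret-data.tar.gpg
gpg -c secret-data.tar
# decrypt later:
gpg -d secret-data.tar.gpg > secret-data.tar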