General :: Limit Any Process From Using More Than 500 MB Of RAM?
Jan 30, 2011
I would like to limit any process from using more than 500 MB of RAM. AFAIK this is done with the rss entry in /etc/security/limits.conf, but the process gnome-panel is apparently using 618436 kB of VmRSS. How can this be?
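One hedged pointer here: per setrlimit(2), the RSS limit is ignored by Linux kernels from 2.4.30 on, which would explain gnome-panel sailing past it. Limiting address space instead does take effect, though it caps virtual memory rather than resident memory. A minimal limits.conf sketch using the 500 MB figure from the question:
Code:
# /etc/security/limits.conf -- 'rss' is ignored on modern Linux kernels;
# 'as' (address space, in KB) is enforced instead
*    hard    as    512000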
I'm writing a shell script which aims to create a safe compressed (xxx.sql.gz) copy of MySQL databases. This script is planned to be run as a cron job.
Well, what I need to add to this script is something that limits CPU usage for the whole process (just in case the database being dumped is a huge one). So, after some googling I found a couple of solutions:
- Using cpulimit. I tried to place the code in Position(1) and Position(2), but it didn't seem to work. Any idea about the right use?
- Using nice (see the sketch after this question).
Well, assuming I named my shell script (sqlbacker)..
Finally, this is my first time ever writing a shell script, so correct me if I made a mistake somewhere :-) (The script itself works perfectly.)
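A minimal sketch of how nice could wrap such a backup. The database name (mydb), the output path, and the cpulimit percentage are placeholders, not anything from the original script:
Code:
#!/bin/bash
# sqlbacker sketch: run the dump and the compression at the lowest CPU
# priority (nice 19) and idle I/O priority, piping straight into gzip
nice -n 19 ionice -c 3 bash -c \
  'mysqldump --single-transaction mydb | gzip > /backups/mydb.sql.gz'

# cpulimit alternative: cap an already-running dump at ~30% of one core
# cpulimit -e mysqldump -l 30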
There's a dysfunctional process eating copious CPU time. Is there a way to effectively assign it a high nice value? I need this to happen whenever it runs, for whatever reason; I can't be bothered to track down all the scripts and scenarios that cause it to run and change them to use nice, and I can't be bothered to manually run renice whenever I notice it running.
I want the OS to automatically assign a high nice value to this process, perhaps based on the process's name. Is this possible? Presumably, a cron job could run every 5 minutes and renice every process matching a given name, but I'm hoping for a solution with more finesse.
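For what it's worth, the cron fallback mentioned above can be a one-liner. A hedged sketch in /etc/cron.d format (hence the user field), with badproc standing in for the real process name:
Code:
# renice every matching process to the lowest priority every 5 minutes
*/5 * * * * root renice -n 19 -p $(pgrep -d' ' badproc) 2>/dev/null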
I've written a program for a class that my professor will be testing in various low memory environments to see how it behaves when the program runs out of memory. Is there a way I can simulate the execution in a low memory environment without creating a virtual machine?
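One hedged way to do this without a VM: cap the process's address space with ulimit before launching it, so allocations start failing at a chosen ceiling. ./myprogram and the 10 MB figure are placeholders; tune the value until allocations fail where you want:
Code:
# limit virtual memory for this shell and its children (value in KB)
ulimit -v 10240
./myprogram   # malloc() should now start failing near the ~10 MB ceiling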
I have already tried trickle and wondershaper. I need a program that can limit the download/upload speed of an already-running program, similar to how NetLimiter on Windows limits already-running processes. This is on Linux.
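One hedged sketch for shaping a process that is already running, assuming a kernel with the net_cls cgroup controller mounted: move the PID into a cgroup, tag its packets with a class ID, and shape that class with tc. The PID (1234), interface (eth0), and rate are placeholders, and this covers outbound traffic only:
Code:
mkdir /sys/fs/cgroup/net_cls/slow
echo 0x10001 > /sys/fs/cgroup/net_cls/slow/net_cls.classid   # maps to class 1:1
echo 1234 > /sys/fs/cgroup/net_cls/slow/tasks                # move the running PID in
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps # ~100 KB/s
tc filter add dev eth0 parent 1: handle 1: cgroup            # match by cgroup tag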
I am trying to run audio conversion on my server, and I want it limited to a certain number of processes based on the process name. I am using the following script, but it isn't limiting the number of jobs like I want it to.
Code:
#!/bin/bash
num_jobs=13   # bash assignment: no '$' and no spaces around '='
# wait while the number of running pacpl processes is at or above the limit
while [ "$(ps -A | grep -v grep | grep -c pacpl)" -ge "$num_jobs" ]
do
    sleep 1
done
# ...launch the next conversion here...
sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'hard'
Dec 28 22:42:29 yn54 sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'soft'
Dec 28 22:42:29 yn54 sudo: pam_limits(sudo:session): wrong limit value 'unlimited' for limit type 'hard'
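A hedged guess at the cause: some pam_limits versions reject the keyword 'unlimited' for certain limit items (nofile in particular). Replacing the keyword with a number in limits.conf usually clears the complaint; 65536 below is just an example value:
Code:
# /etc/security/limits.conf
*    soft    nofile    65536
*    hard    nofile    65536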
I have a VPS server with 512 MB of memory. php.ini sets the script memory limit to 16 MB. However, I have noticed instances like the following in my top report:
The bolded number, 6.4, is the percentage of server memory this process is using. 6.4% of 512 MB is about 32 MB of memory, so it appears this isn't being limited by php.ini. Am I correct? This leads to the next question: is there some way to limit the amount of memory a single suPHP process can use? (Basically, something like the php.ini setting, but applied to suPHP processes.)
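Two hedged observations. First, top's %MEM counts the whole process image, including the interpreter and shared libraries, so it can legitimately exceed PHP's memory_limit, which only tracks PHP's own allocations. Second, to cap the process itself, Apache's RLimitMEM applies to processes forked by Apache children, which should include suPHP/CGI. The byte values below are example numbers, not recommendations:
Code:
# in the Apache config: soft and hard address-space limits, in bytes
RLimitMEM 33554432 50331648   # 32 MB soft, 48 MB hard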
I have a high-priority service that I start with sudo nice -n -10 process. This process does not need superuser rights, though, except for the priority elevation, and nice requires superuser privileges to raise priority.
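One hedged way around sudo here: pam_limits can grant a specific user the right to raise priority. The user name svcuser is a placeholder:
Code:
# /etc/security/limits.conf -- allow svcuser nice values down to -10
svcuser    -    nice    -10
# the service can then start itself with: nice -n -10 ./process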
Description of what the code does, or what I intended it to do:
1. Created a child process from the parent process using 'fork()'.
2. Sent the signal 'SIGALRM' from the child process to the parent process using the 'sigqueue' function.
(The third parameter of the 'sigqueue' function contains the message ('msg') which the child process wants to send to the parent. 'msg' is a structure instance containing a) the pid of the child and b) a string.)
3. Print the 'msg' sent by the child process inside the signal handler function 'sig_action_function' of the parent process. I am getting some junk value when this line is executed:
Code:
printf("%d ",msg->cpid);
I expected to get the pid of the child process, which the child sent to the parent through the signal.
As we all know, the process scheduler does process scheduling, and it is a process as well. I was just wondering: if that is so, then the "process scheduler" process should be part of the process queue as well.
So if there are 5 processes in the process queue and the process scheduler is administering them, then since it is also a process, once it puts a process into the RUN state it should itself go into the queue, because at one instant only one process can execute on a processor. This is quite confusing for me. Please help me out. I tried to search on this but could not find any relevant topics.
I have a process running on Linux. When I do ps -eaf | grep <myProcess>, it shows multiple entries for <myProcess> with different pids for each entry. Kindly tell me what could be the reason for a process having multiple pids?
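A hedged way to tell the likely causes apart (forked copies versus threads); myProcess is the placeholder name from the question, and the [m] bracket trick keeps grep from matching itself:
Code:
ps -ef  | grep '[m]yProcess'   # forked copies: distinct PID/PPID pairs
ps -eLf | grep '[m]yProcess'   # threads: one PID, distinct LWP column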
I've been running my shell script for about half an hour now. It's taking longer than I thought to process all the data. I have its process ID. Is it possible to save the process, log out, then log in and continue the process? I know how to pause a process using kill -STOP pid and continue it using kill -CONT pid, but that only works if you don't log out after pausing it.
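A hedged sketch of the usual approaches; myscript.sh and the job number are placeholders. A job already running in the current shell can be detached so that logging out no longer kills it; future runs are easier inside screen:
Code:
disown -h %1                      # if the script is job 1 of this shell
# pre-emptively: nohup ./myscript.sh &
# or run inside screen and reattach after logging back in:
# screen -S work ./myscript.sh    ...later: screen -r work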
I want to kill the parent process after the fork() call, but if I kill the parent process with exit(0), the main() thread is terminated as well, so the child process doesn't work anymore. Is there any way to kill only the parent process without affecting the child process?
I have a problem with both genisoimage and mkisofs: both of them limit filenames to 8 characters. There are very many options for them; which one would remedy the issue?
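For reference, plain ISO9660 is what truncates names to 8.3; the Joliet and Rock Ridge extensions carry long names. A hedged sketch with placeholder paths, and the options are the same in both tools:
Code:
genisoimage -J -r -o out.iso /path/to/files   # -J Joliet, -r Rock Ridge
# mkisofs -J -r -o out.iso /path/to/files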
I currently use cp to back up data; I prefer it over rsync. I use the -b switch to make a backup of data and recently found you can use --backup=t to create numbered backups. Using --backup=t means that I could end up having 100 versions of a file if I change it 100 times, whereas with the -b switch I will only ever have 2 versions. Is it possible to limit the numbered backups to 5, for example, so I would only ever have 5 versions?
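cp itself has no option to cap how many numbered backups it keeps, so a hedged cleanup sketch to run after the copy, keeping only the 5 newest (file.txt is a placeholder; GNU numbered backups look like file.txt.~1~):
Code:
ls -t file.txt.~*~ 2>/dev/null | tail -n +6 | xargs -r rm --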
I have a CentOS server running Squid 2.6. I have to configure Squid to restrict some IPs to 10 KB uploads, so I set request_body_max_size, but this directive applies to all IPs; I want to limit uploading only for particular IPs.
I have a Linux server which users connect to with SSH. My users only upload and download content from their /home folders.
Basically, I want them to be limited to seeing and using only their home folder.
I read that it might not be a good idea to do so, since they need read permissions to run programs and scripts, but again: they are only downloading/uploading content to their home dir.
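A hedged sshd_config sketch (OpenSSH 4.9 or later) that fits this use case: chroot members of a "users" group (placeholder name) into their homes and allow SFTP only, so no shell programs need to run inside the jail. Note sshd insists every component of the chroot path be root-owned and not group/world-writable, so in practice uploads go into a writable subdirectory of the home:
Code:
Match Group users
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no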
I've just found that one of my email addresses with an auto-reply gets stuck in a loop when it receives email from senders who also have an auto-reply set up. Is there an easy way for me to set up Postfix so that it only sends one auto-reply email, or so that it sends a maximum of 1 auto-reply message per day to each sender?
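One hedged approach is to generate the reply with the classic vacation(1) program instead of an unconditional rule: it records each sender in a database and, by default, answers any given sender at most once per week, and it ignores mail that identifies itself as automated. myuser is a placeholder account name:
Code:
# ~/.forward of the auto-replying account: keep local delivery and
# pipe a copy to vacation, which rate-limits replies per sender
\myuser, "|/usr/bin/vacation myuser"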
I'm new to Linux and would like help or to be taught. My question is: how do I limit users to their own directory? For example, user andrew with /home/andrew shouldn't be able to access root or usr.
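A hedged sketch of one common approach, a restricted shell, using andrew from the question; note rbash is a convenience fence rather than real security, and the chroot/SFTP setup in the SSH question above is stronger:
Code:
sudo usermod -s /bin/rbash andrew   # rbash forbids cd, changing PATH,
                                    # and running commands by absolute path
# then point andrew's PATH at a single vetted directory of allowed tools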