Programming :: Possible To Know The Amount Of Memory Used By A Program Before Running It?
Mar 3, 2009
I would like to know if there is a Linux command to check the amount of memory used by a program. The programs I am using were compiled with gfortran.
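In general there is no way to know a program's dynamic memory use before running it, but it is easy to measure a run after the fact. A minimal sketch in C that reports the peak resident set size of a child process via getrusage (the program to measure is passed on the command line):

Code:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>

/* Run a command and report its peak resident set size afterwards. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {                    /* child: run the program under test */
        execvp(argv[1], &argv[1]);
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);          /* wait for it to finish */
    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);   /* usage of reaped children */
    printf("peak RSS: %ld kB\n", ru.ru_maxrss);  /* kilobytes on Linux */
    return 0;
}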
I have a Java program that runs on Debian as a background process. Yesterday the Java program stopped running. I looked at the memory usage; the system had only 5 MB of memory left, so my guess is that the Java program ran out of memory to use.
However, after we restarted the Java program, the free memory count started to go up. It kept climbing from 5 MB to over 400 MB. The increase happened slowly: with each passing minute a bit more memory was added to the free pool, while the Java background process was running.
I wonder why this would ever happen. It's as if our Java program first brought the machine down by consuming all the memory, and then, after the restart, started giving memory back.
I need to allocate a percentage of the total system memory for a buffer, but what is the best method to determine how much memory is in the system? So far the only way I have found is to get the number of memory pages:
Code:
long pages = sysconf(_SC_PHYS_PAGES);

Is that the only option?
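As far as I know, sysconf is in fact the standard programmatic route; pairing _SC_PHYS_PAGES with _SC_PAGESIZE gives the total in bytes. A minimal sketch (the 10% buffer share is just an example figure):

Code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long pages = sysconf(_SC_PHYS_PAGES);   /* physical pages in the system */
    long psize = sysconf(_SC_PAGESIZE);     /* bytes per page */
    long long total = (long long)pages * psize;
    long long buf   = total / 10;           /* 10% of RAM, example only */
    printf("total RAM: %lld bytes, buffer: %lld bytes\n", total, buf);
    return 0;
}

Alternatives are reading MemTotal from /proc/meminfo or calling sysinfo(2).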
I am trying to write a script that calculates the total amount of installed memory for use in an anaconda kickstart script, so that the swap file is created at 2x the installed memory. So far I have the sizes of the installed RAM DIMMs, but I need a way to total them up and produce a variable I can use in the %pre section of the install.
Note: some servers could have anywhere from 1 to 16 DIMMs installed, so the script needs to be able to handle this. I also cannot use bc, as it does not exist during the install stage. I am guessing I need a while loop together with expr, but I do not know where to start with the logic.
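Since bc is unavailable, plain integer arithmetic is all that's needed; expr or the shell's $(( )) arithmetic both handle it. Just to show the computation itself, here is a sketch in C that doubles MemTotal from /proc/meminfo (the %pre section would do the same sum over the DIMM sizes with expr or $(( ))):

Code:
#include <stdio.h>

/* Print a swap size of twice the installed memory, read from /proc/meminfo. */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long mem_kb = 0;
    while (f && fgets(line, sizeof line, f))
        if (sscanf(line, "MemTotal: %ld kB", &mem_kb) == 1)
            break;
    if (f)
        fclose(f);
    printf("swap: %ld kB\n", 2 * mem_kb);
    return 0;
}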
I am trying to run a simple Perl program that fetches stock price data from Yahoo for just one ticker symbol. It was running fine until this morning, when it froze and displayed the message: Out of memory!
I cleared my cache by running the following:
Code:
$ sync
$ echo 1 | sudo tee /proc/sys/vm/drop_caches
$ echo 2 | sudo tee /proc/sys/vm/drop_caches
$ echo 3 | sudo tee /proc/sys/vm/drop_caches

but it hasn't helped.
Even Firefox has been freezing, so I basically cannot do anything on my computer.
I went to an interview last week, and the interviewer asked a simple question that I have been trying to solve for a couple of days. I tried Google, but I just can't get the search keywords right; the results were useless. The question was: "How can we give a process a limited amount of memory before we start its execution?" The question was related to an X11 system, so maybe some flags must be set to limit its memory.
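One mechanism I'm aware of on Linux is setrlimit(2), applied in the child between fork and exec so the limit affects only the new process. A sketch, assuming a 64 MB cap (both the cap and the program name are placeholders):

Code:
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: cap the address space, then run the target program */
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
        setrlimit(RLIMIT_AS, &rl);   /* allocations beyond 64 MB will fail */
        execlp("./the_program", "the_program", (char *)NULL);
        _exit(127);
    }
    wait(NULL);
    return 0;
}

From a shell, running ulimit -v before launching the program achieves the same thing.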
I wrote a program in lcc on Windows and now have to build it with gcc on Unix. In lcc there was an option to give the stack more memory than the default. The code works in lcc, but compiled with gcc it gives a segmentation fault.
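On Linux with gcc the stack size is normally a run-time limit (RLIMIT_STACK) rather than a compiler option. A sketch of raising it at startup (64 MB is an example figure; the new limit only helps frames created after the call, so large arrays in main itself are better handled via the shell's ulimit -s or moved to malloc):

Code:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    getrlimit(RLIMIT_STACK, &rl);
    rl.rlim_cur = 64L * 1024 * 1024;   /* request a 64 MB stack */
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        perror("setrlimit");
    /* deep recursion / large automatic arrays can now go further */
    return 0;
}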
I wrote a multithreaded program (approximately 1000 threads have to run), and each thread has to parse a file (one file per thread, e.g. thread1 parses file1, thread2 parses file2, and so on). I wrote the parse routine, and it works well if I create 50 threads, but if I run more than 200 threads I get double-free corruption.
Sometimes I also get a parsing error, as follows:
Code:
powersetting.6607:1: parser error : Start tag expected, '<' not found

(powersetting.6607 is the file name; when I check the file, it does start with '<'.)
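That error text looks like libxml2's. If that is the parser in use, one well-known requirement is that xmlInitParser() be called once, from the main thread, before any other threads use the library; skipping it can produce exactly this kind of corruption and spurious parse failures under load. A sketch with two threads (file names are placeholders; compile with -lxml2 -lpthread):

Code:
#include <pthread.h>
#include <libxml/parser.h>

static void *parse_one(void *arg)
{
    xmlDocPtr doc = xmlReadFile((const char *)arg, NULL, 0);
    if (doc)
        xmlFreeDoc(doc);       /* each thread frees only its own document */
    return NULL;
}

int main(void)
{
    xmlInitParser();           /* once, before any worker thread starts */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, parse_one, "file1.xml");
    pthread_create(&t2, NULL, parse_one, "file2.xml");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    xmlCleanupParser();
    return 0;
}

Double-free crashes can also come from two threads sharing one document or buffer, so it is worth checking that every allocation is freed by exactly one thread.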
On a new box with Debian Lenny I have 2x2 GB of DDR2, but the karamba applet shows only 3291 MB. And dmesg shows this:

Code:
dmesg | grep Memory
[0.004000] Memory: 3362976k/4194304k available (1769k kernel code, 43092k reserved, 752k data, 244k init, 2489792k highmem)
I need to monitor the amount of free physical memory on Linux from within a large C program. The sampling will occur very frequently, so the measurement cannot be performance-intensive. The fact that Linux uses much of the theoretically free memory for cache and buffers means that just measuring the free pages is not sufficient. Using free + cache + buffers gives an overestimate, as not all cache/buffers can be freed, but I could get a rough idea of how much generally can't be and subtract that from the answer.
Possible options that I've come across so far:
- Parsing /proc/meminfo - but that involves reading from a file, which is slow.
- Extracting the free, cache and buffers values from the output of the free command - but is there a quick way to do this?
- Parsing the /proc/freemem file produced by the API here - but this is again reading from a file. Is there a way to get that output directly?
Speed is an extremely high priority, and the answer must accurately represent the amount of memory that my program could expand into (to within a few MB).
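One direct route is the sysinfo(2) system call, which costs a single syscall rather than any file parsing; it reports free and buffer memory, though not the page cache, so that one figure would still need /proc/meminfo or the rough estimate described above. A sketch:

Code:
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    long long free_b = (long long)si.freeram   * si.mem_unit;
    long long bufs_b = (long long)si.bufferram * si.mem_unit;
    /* "cached" is not in struct sysinfo; only /proc/meminfo has it */
    printf("free: %lld bytes, buffers: %lld bytes\n", free_b, bufs_b);
    return 0;
}

Also worth noting: /proc/meminfo is generated in memory by the kernel, not read from disk, so parsing it is faster than "reading from file" might suggest.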
As you can see, top indicates 3,544,864 kB (3.5 GB) of memory used, while GNOME System Monitor shows only 609 MB. What's wrong here? (I am pretty sure GNOME System Monitor is right. top updates every second.)
As per the above calculation, 81% of memory is used. Is this correct? If so, am I running out of memory? What percentage limit should I stay under for better performance?
When I run the Ardour sound editor I get this message, but it starts OK: "Your system has a limit for maximum amount of locked memory! This might cause Ardour to run out of memory before your system runs out of memory. You can view the memory limit with 'ulimit -l', and it is normally controlled by /etc/security/limits.conf"
Code:
bash-4.1$ ulimit -l
64

my limits.conf is like this:

Code:
audio - rtprio 99
@audio - memlock 250000
My problem is that I installed ZoneMinder for camera security, and I'm testing it on my laptop with the built-in webcam. Everything seems to work perfectly, except that when I try to view the live feed from the camera, it's just a black box. No video.
I checked this website and it's exactly the problem I'm having with a fix for it but his fix doesn't work. He says to type:
Code:
user@ubuntu:~$ sudo echo "256000000" > /proc/sys/kernel/shmmax
user@ubuntu:~$ sudo service apache2 restart
user@ubuntu:~$ sudo service zoneminder restart
I am trying to understand a large amount of allocated memory that seems not to be accounted for on my system. I'll say up front that I am discussing memory usage without cache and buffers, because I know that misunderstanding comes up a lot. I am on a KDE 4.3 desktop (Kubuntu 9.10), using a number of Java apps like Eclipse that tend to eat up a lot of memory. After a few days, even if I quit most apps, 1 GB of RAM remains allocated (out of 2 GB). This seemed excessive, so I took the time to add up all values of the RES column in htop (for all users); the result was about 1/2 GB. Am I trying to match the wrong values? Or could some memory be allocated and not show up in the process list? This is the output of free:
Code:
             total       used       free     shared    buffers     cached
Mem:       2055456    1940264     115192          0     123864     702900
I am trying to use Perl to run a program with the eval command, but the program runs forever. I just need it to run for one second, stop, and then give me the output. I tried using fork, but it does not really work: the child process is not being killed.
my $pid = fork;
if ($pid == 0) {
    exec "ngrep etc...";   # exec (not backticks) so the kill below reaches ngrep
}
sleep 1;                   # let the child run for one second
kill 'TERM', $pid;         # then stop it
waitpid $pid, 0;
I have a script that calls another program/script, xxx, to run in the background. Supposedly this program should finish within five minutes at most, so after five minutes I run some other steps to bring the script to completion. My problem is that sometimes the program takes longer than five minutes, and this breaks the rest of the steps in the script. Can anyone suggest how to restructure my script? At the moment the ksh script, test.ksh, does the following:
test.ksh:

Code:
.....
xxx/xxx.ksh    <--- program/script called by the script
sleep 300
..... run the rest of the script .....

The problem is that sometimes xxx/xxx.ksh takes longer than 300 seconds. Is there any way I can make sure xxx/xxx.ksh finishes before I run the rest of the script?
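In ksh the usual fix is to start the job in the background and use the shell built-in wait on its PID instead of a fixed sleep. For illustration, here is the same block-until-done idea sketched in C (the child command is the script's placeholder path):

Code:
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: run the long-running job */
        execl("/bin/sh", "sh", "xxx/xxx.ksh", (char *)NULL);
        _exit(127);
    }
    /* parent: block until the job actually finishes, however long it takes */
    waitpid(pid, NULL, 0);
    /* ...now run the rest of the steps... */
    return 0;
}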
I'm looking for a way to detect whether or not a program has been called from a pipe, e.g.
Code:
whatever | my_program

versus simply being executed directly:
Code:
my_program
Why? In the first case I want to run the program non-interactively, and in the latter case I want to print out user-friendly messages. I've been thinking along the lines of some check I haven't yet found.
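The standard check here is isatty(3): when the program is on the receiving end of a pipe, its stdin is not a terminal. A minimal sketch:

Code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (isatty(STDIN_FILENO)) {
        /* stdin is a terminal: run interactively, print friendly messages */
        printf("interactive mode\n");
    } else {
        /* stdin is a pipe or file: run non-interactively */
        printf("non-interactive mode\n");
    }
    return 0;
}

The same test on STDOUT_FILENO tells you whether your output is being piped somewhere, which is how tools like ls decide whether to colorize.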
The situation is that I have an MPI-parallel fortran program. I run it and it's distributed on N processors, and each of these processes must call an external program.
This external program is also an MPI program, however I want to run it in serial, on the processor that is calling it, as if it were part of the fortran program. The fortran program waits until the external program has completed, and then continues.
The problem is that this external program seems to run on any processor, and not necessarily the (now idle) processor that called it.
How can I call the program and ensure it runs only on this processor?
Extra information that might be helpful:
If I simply run the external program from the command line (i.e., type "/path/myprogram.ex <enter>"), it runs fine. If I run it within the Fortran program by calling it via
CALL SYSTEM("/path/myprogram.ex")
it doesn't run at all (doesn't even start) and everything crashes. I don't know why this is.
If I call it using mpiexec:
CALL SYSTEM("mpiexec -n 1 /path/myprogram.ex")
then it does work, but I still have the problem that it can end up on any node.
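One workaround, assuming a Linux system with the taskset utility and glibc's sched_getcpu(), is to pin the child command to whatever CPU the calling rank is currently on. A sketch of a small C helper that the Fortran program could call instead of using CALL SYSTEM directly:

Code:
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Run "path" pinned to the CPU of the calling process; blocks until done. */
void run_on_my_cpu(const char *path)
{
    int cpu = sched_getcpu();                 /* CPU this rank is on now */
    char cmd[512];
    snprintf(cmd, sizeof cmd, "taskset -c %d %s", cpu, path);
    system(cmd);
}

The equivalent from Fortran would be building a "taskset -c <cpu> mpiexec -n 1 /path/myprogram.ex" command string; taskset's CPU affinity is inherited by the processes it launches.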
My requirement is to save files before shutting the Linux machine down unattended, i.e. when the user is not near the machine. This is done whenever the UPS battery is about to die, so that open files get saved. OpenOffice/text-editor applications that can save via keystrokes will have to be found among the running processes, and keystrokes sent to them from a C program that was started in the non-graphical stage, i.e. a C program that forks into memory as a daemon before the X Windows part starts. How do I send keystrokes to a running application (like Ctrl+F, then wait, then send the next set of keystrokes, until the file is saved as a new file or as the same file), either from a C program or a script?
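One approach, assuming an X server is reachable (DISPLAY set) and the XTest extension is available, is to synthesize key events with libXtst; they are delivered to whichever window currently has focus. A sketch that sends Ctrl+S (compile with -lX11 -lXtst):

Code:
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);  /* needs DISPLAY, e.g. ":0" */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    KeyCode ctrl = XKeysymToKeycode(dpy, XK_Control_L);
    KeyCode s    = XKeysymToKeycode(dpy, XK_s);

    /* press and release Ctrl+S, as seen by the focused window */
    XTestFakeKeyEvent(dpy, ctrl, True, 0);
    XTestFakeKeyEvent(dpy, s, True, 0);
    XTestFakeKeyEvent(dpy, s, False, 0);
    XTestFakeKeyEvent(dpy, ctrl, False, 0);

    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}

From a script, the xdotool utility (e.g. xdotool key ctrl+s) does the same job.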
I have a few multi-user servers in an academic laboratory. I am having a problem with some users maxing out the available RAM, causing such severe slowdowns that the machine essentially crashes. My servers are Dell PowerEdges running Ubuntu 8.10 Server Edition (not my choice). I would like to set a maximum limit on the amount of RAM a user can use. This morning I experimented with setting limits via /etc/security/limits.conf and using ulimit; neither of them prevented my test program, a simple infinite loop of mallocs, from crashing the server.