Red Hat / Fedora :: Memory Utilization For Particular Process
Jul 15, 2009
Is there any command to get the memory utilization of a particular process in Linux? I tried top and /proc/[pid]/status, but the results don't look right: the reported memory keeps increasing. Can anyone suggest anything other than top and /proc/[pid]/status?
How can I get the CPU utilization of a particular process without using the top command? I want to do it programmatically in C or C++. The top source code is large, and I can't work out where it computes the %CPU figure.
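One approach, sketched from the /proc layout documented in proc(5): read utime and stime (fields 14 and 15 of /proc/[pid]/stat, in clock ticks), sample twice, and divide the delta by the elapsed time. The one-second interval below is an arbitrary choice:

Code:
/* cpu_usage.c - rough %CPU of a pid, sampled over one second.
   A minimal sketch; field positions follow proc(5). Build: gcc cpu_usage.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return utime+stime (in clock ticks) for pid, or -1 on error. */
static long long proc_ticks(pid_t pid)
{
    char path[64], buf[4096];
    snprintf(path, sizeof path, "/proc/%d/stat", pid);
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    /* comm (field 2) may contain spaces; skip past the closing ')'. */
    char *p = strrchr(buf, ')');
    if (!p) return -1;
    unsigned long long utime, stime;
    /* After ')': state is field 3; utime and stime are fields 14 and 15. */
    if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu",
               &utime, &stime) != 2)
        return -1;
    return (long long)(utime + stime);
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 1; }
    pid_t pid = (pid_t)atoi(argv[1]);
    long hz = sysconf(_SC_CLK_TCK);    /* clock ticks per second */
    long long t1 = proc_ticks(pid);
    sleep(1);
    long long t2 = proc_ticks(pid);
    if (t1 < 0 || t2 < 0) { fprintf(stderr, "cannot read stats\n"); return 1; }
    printf("%%CPU over 1s: %.1f\n", 100.0 * (t2 - t1) / hz);
    return 0;
}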
I have a server running the samba process, with about 70 samba users connected at a time. The system has 4GB of memory, and each samba process seems to be using only 3352KB of memory when I run pmap -d (pid of samba).
But when I run the top command, it reports the following:

Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9% us, 4.9% sy, 0.0% ni, 93.3% id, 0.8% wa, 0.2% hi, 0.0% si
Mem: 3895444k total, 3163192k used, 732252k free, 352344k buffers
Swap: 2097144k total, 208k used, 2096936k free, 2487636k cached
Why would the system be using so much memory? By the way, the server is not running any other processes. The samba version running on it is 3.0.33-0.17.
I have an Ubuntu 10.04 LTS server and about 40 thin clients running LTSP in a school's computer lab. Every time the users start applications, the server's memory utilization increases, while swap stays almost unused. I have 12GB of memory on that server.
I have a computer with 16GB of RAM. At the moment, top shows all the RAM is taken (NOT by cache), but the RAM used by the various processes adds up to far less than 16GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
I did a fresh install of Ubuntu 9.10 and installed some software after that. Since then, some process has been eating half of my memory. I have checked the processes running in the system monitor, but everything looks normal; the biggest consumer is compiz at about 26MB, which seems fine. I have restarted my computer several times, and for the first 5 minutes it's fine; after that my CPU fan runs very fast again and one CPU core is at 95% (I have a dual core). Please help me out; this invisible thing is driving me crazy. I am attaching my htop screenshot (sorted by CPU%); right now the CPU is not fully used, but the fan is still struggling hard and fast.
The process kslowd000/kslowd001 eats 60% of my CPU and 15% of my memory, and I can't kill it even as root. This process makes my computer slower than "an IBM 704 running Fortran programs". I've seen this process on Fedora 14 and now on Scientific Linux 6 (RHEL 6 recompiled); SL 5.5 didn't have it.
I want to know if I can increase the memory allocated to a process manually while the process is running, and if it is possible, how I can do this.
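If "memory allocated" means the process's resource limits rather than its heap, then on kernels since 2.6.36 prlimit(2) can change another process's limits while it runs. A minimal sketch, using RLIMIT_AS (the address-space cap) as the example; raising another process's hard limit needs CAP_SYS_RESOURCE:

Code:
/* raise_as.c - raise a running process's RLIMIT_AS limit via prlimit(2).
   Assumes Linux >= 2.6.36 and sufficient privilege (CAP_SYS_RESOURCE). */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s pid bytes\n", argv[0]); return 1; }
    pid_t pid = (pid_t)atoi(argv[1]);
    unsigned long long bytes = strtoull(argv[2], NULL, 10);
    struct rlimit rl = { .rlim_cur = bytes, .rlim_max = bytes };
    struct rlimit old;
    /* Set the new limit and retrieve the old one in a single call. */
    if (prlimit(pid, RLIMIT_AS, &rl, &old) != 0) { perror("prlimit"); return 1; }
    printf("RLIMIT_AS: %llu -> %llu\n",
           (unsigned long long)old.rlim_cur, bytes);
    return 0;
}

Recent util-linux also ships a prlimit(1) command (prlimit --pid <pid> --as=<bytes>) that does the same from the shell.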
Assume someone binds a particular process to a particular CPU core (on a multi-core machine) using sched_setaffinity() or similar functions. How can we then find the core that process is running on, and the CPU utilization of that process on that core (programmatically or with a Linux command)?
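One programmatic route, assuming the proc(5) layout: field 39 of /proc/[pid]/stat is the CPU the task last ran on, which is the same value that ps -o psr -p <pid> prints. A rough sketch:

Code:
/* last_cpu.c - print the core a pid last ran on (field 39 of
   /proc/[pid]/stat, per proc(5)); same value as `ps -o psr -p <pid>`. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 1; }
    char path[64], buf[4096];
    snprintf(path, sizeof path, "/proc/%s/stat", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    char *p = strrchr(buf, ')');           /* skip "pid (comm)" safely */
    if (!p) return 1;
    int field = 2;                         /* ')' closes field 2 (comm) */
    for (char *tok = strtok(p + 2, " "); tok; tok = strtok(NULL, " ")) {
        if (++field == 39) {               /* "processor" per proc(5) */
            printf("last ran on CPU %s\n", tok);
            return 0;
        }
    }
    return 1;
}

Since the process is pinned, its utilization on that core can be derived the same way as in the %CPU sketch above: sample utime+stime from the same file over an interval; all of that time is spent on its one core.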
I have an Acer Aspire netbook with 1GB RAM and a 1.6GHz dual-core 32-bit x86 chip. The KPackageKit / yum / rpm chain is running too slowly for me. In addition to the time required to download any new packages or updates, it seems to need at least one full minute of processing time to install each package, update, or bug fix, no matter how small, and another full minute per package for "cleaning up." Running yum from the command line takes nearly the same amount of time. During this time, I cannot run any other applications without severe thrashing; it seems that a full gigabyte of memory is in use, with some 100M swapped out to disk.
Is there any way to reduce the running time and memory requirements of the update process? When not updating or installing software, I do not normally run out of memory (i.e. begin thrashing) until I have about a dozen browser tabs open, or the like.
- I am running Oracle UEL 6.0 (2.6.32-100.28.5.el6) because stock RHEL 6.0 (2.6.32-71.el6) has issues with the async I/O driver.
- The test is a high throughput performance benchmark running on Oracle 11gR1
- I am pumping a lot of disk I/O through the system while running with enough users to max out the 8 CPUs, which get to 99+% utilization with RHEL 5.3
- The server is a 4-socket Nehalem EX X7560. Right now, only 2 cores per socket are enabled.
- There are GBs of memory left over. The disk response time of the SSD arrays is around 1ms, and the arrays are capable of 4-5 times more IOPS. Same with networking, etc. The testbed is capable of maxing out 32 CPUs with RHEL 5.3. I cannot push RHEL 6.0 CPU utilization past 95-96%, while the same test on every flavor of RHEL from 4.4 to 5.3 can totally saturate the CPUs. It feels like the system is intentionally holding back some CPU cycles.
I have been assigned a school project on detecting memory leaks in Linux processes. I am reading, but have found it hard and inefficient to go through the very vast documentation without knowing what to look for. Could you please give me some guidelines on this subject?
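As one concrete starting point: Valgrind's memcheck tool is the standard way to catch leaks in user-space processes. A deliberately leaky toy program like the sketch below, run under valgrind --leak-check=full, shows the kind of report to expect (the file and function names are just illustrations):

Code:
/* leak.c - deliberately leaks one allocation.
   Build: gcc -g leak.c -o leak
   Run:   valgrind --leak-check=full ./leak
   memcheck should report 64 bytes "definitely lost",
   with a stack trace pointing at make_buffer(). */
#include <stdlib.h>
#include <string.h>

static char *make_buffer(void)
{
    char *buf = malloc(64);      /* never freed: a leak */
    if (buf) strcpy(buf, "leaky");
    return buf;
}

int main(void)
{
    make_buffer();               /* return value discarded */
    return 0;                    /* the buffer is now unreachable */
}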
I need to know what process is using memory on my machine, and for what purpose. The ps utility with its various options doesn't give exactly what I want: if I sum values like RSS or VSZ across all processes, the total doesn't match (even approximately) what I get from free | grep "buffers/cache". How can I get this information? Even better, I would like to see the contribution of every process, ramdisk, etc. to memory usage.
I need an explanation of low-level (assembly-level) memory management: how a process acquires more memory, how memory is shared among processes, and so on. I don't want to know how to use malloc or other library functions, but rather how an example malloc implementation would acquire memory in the first place.
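For the "how does an allocator get memory" part, the kernel-facing primitives are brk/sbrk and mmap(2); glibc's malloc uses the former for the main heap and the latter for large blocks. A toy bump allocator that takes its arena straight from mmap might look like this sketch (the 1 MiB arena size is arbitrary):

Code:
/* bump.c - toy allocator: asks the kernel for an arena with mmap(2),
   then hands out pieces by bumping a pointer. No free(); a sketch of
   how an allocator acquires memory, not a real malloc. */
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

#define ARENA_SIZE (1 << 20)         /* 1 MiB arena */

static char *arena, *next;

static void *bump_alloc(size_t n)
{
    if (!arena) {
        /* One anonymous, private mapping: this is the syscall by which
           the process actually gains memory from the kernel. */
        arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED) return NULL;
        next = arena;
    }
    n = (n + 15) & ~(size_t)15;      /* 16-byte alignment */
    if (next + n > arena + ARENA_SIZE) return NULL;
    void *p = next;
    next += n;
    return p;
}

int main(void)
{
    char *s = bump_alloc(32);
    if (s) { sprintf(s, "hello from the arena"); puts(s); }
    return 0;
}

Sharing between processes bottoms out in the same machinery: a MAP_SHARED mapping of a common file or POSIX shared-memory object (shm_open) makes the same physical pages appear in both address spaces.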
In Linux, how can I display the memory usage of each process when I do a ps -ef? I would like the virtual memory, resident memory, and shared memory of each process. I can get that via top, but I want the same info from ps -ef so that I can pipe the output to grep {my process name}.
I am running a series of tests for an implementation of a remote pager that sends page faults to other computers in a network. Long story short, I was wondering if there is an easy way to force a process to use virtual memory as opposed to physical RAM, so that I can better measure the performance of my implementation against how the system performs while swapping to the hard drive.
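One possible route, assuming a kernel with the cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (both that path and the group name pager-test below are assumptions about the setup): cap the process's resident memory so everything beyond the cap must be paged out to swap. A sketch, to be run as root:

Code:
/* swapbox.c - cap a pid's resident memory with the cgroup v1 memory
   controller so the kernel must swap the rest of its pages.
   Assumes the controller is mounted at /sys/fs/cgroup/memory; the
   group name "pager-test" is arbitrary. Run as root. */
#include <stdio.h>
#include <sys/stat.h>

static int write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%s", val);
    return fclose(f);
}

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s pid bytes\n", argv[0]); return 1; }
    const char *dir = "/sys/fs/cgroup/memory/pager-test";
    mkdir(dir, 0755);                        /* ok if it already exists */
    char path[256];
    snprintf(path, sizeof path, "%s/memory.limit_in_bytes", dir);
    if (write_file(path, argv[2])) return 1; /* cap resident memory */
    snprintf(path, sizeof path, "%s/tasks", dir);
    if (write_file(path, argv[1])) return 1; /* move the pid into the group */
    return 0;
}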
I'm using SUSE and I have 31GB of memory: Mem: 31908592k total, 31429632k used, 478960k free, 12176k buffers. How do I find out what processes are eating up all my memory?
Does anyone know of a Linux utility which will prevent all memory in a forked process from being swapped out to disk? I've seen the mlockall() call, but hacking the app sounds like overkill. My reason for needing this is that I'm running Windows XP under VirtualBox on my Linux netbook, and I'm concerned there are basically two levels of swapping going on, which on a single dinky netbook hard disk isn't ideal.
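Rather than patching the app, one common trick is an LD_PRELOAD shim: a tiny shared object whose constructor calls mlockall() before main() runs. A sketch (assumes root or a large enough RLIMIT_MEMLOCK; the environment variable also propagates to anything the process execs):

Code:
/* lockall.c - build: gcc -shared -fPIC -o lockall.so lockall.c
   use:   LD_PRELOAD=./lockall.so VirtualBox
   Locks all current and future pages of the process so they cannot be
   swapped out. Needs CAP_IPC_LOCK / root or a sufficient
   RLIMIT_MEMLOCK. Note that plain fork() children do not inherit the
   locks, though anything exec'd with LD_PRELOAD still set re-locks. */
#include <stdio.h>
#include <sys/mman.h>

__attribute__((constructor))
static void lock_all_memory(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");
}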
I understand that in Linux virtual memory would be the same as swap, and I also understand that Linux only uses swap when the computer has used up all of its RAM. I hope both assumptions are right. Can I make a process like Firefox use ONLY virtual memory/swap, with no access to RAM?
What originally seemed like an easy thing to calculate has given me a big headache. Perhaps someone can help me with my issue. I am trying to find out how much memory certain application processes are taking. The process always has the same name, main_server, but with an argument telling it what to do when running as a daemon.
When run against all "main_server" processes, the following command produces a result in megabytes based on the rss field of ps.
Code:
CALC=0
# tail of the pipeline and loop body reconstructed from the description above
for ea in `ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS | grep main_server | grep -v "grep main_server" | awk '{print $1}'`; do
    CALC=$((CALC + ea))        # sum the rss column (KB)
done
echo "main_server total: $((CALC / 1024)) MB"
Currently I am left scratching my head. For capacity-planning purposes, it would be nice to know how many more main_server processes could run on the system without causing it to swap. Since buffer and cache usage will shrink as running processes demand more memory, I prefer to look at the free memory excluding cache and buffers. However, since ps reports the processes using more memory than free says is in use without those, I have no way to know how many more processes the system can support. I played around with different fields in ps, such as vsize and size, but had no luck matching up any numbers.
I use Linux and Unix and I want to monitor the memory usage of a process, to catch memory leaks and prevent the system running out of memory. Is there any command or syntax that gives better, more presentable data about the memory usage of one process than the command below?
How can I periodically monitor the memory usage of a process in Linux? Can it be dumped to a file, so that later I can see how the process behaved in terms of memory?
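A minimal sketch of such a logger, assuming proc(5)'s VmRSS line in /proc/[pid]/status; the 5-second interval and the memlog.txt output path are arbitrary choices:

Code:
/* memlog.c - append a pid's VmRSS to a log file every few seconds.
   Sketch only; interval and log path are arbitrary. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 1; }
    char path[64], line[256];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);
    FILE *log = fopen("memlog.txt", "a");
    if (!log) { perror("memlog.txt"); return 1; }
    for (;;) {
        FILE *f = fopen(path, "r");
        if (!f) break;                    /* process has exited */
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "VmRSS:", 6) == 0) {
                /* timestamp + the VmRSS line as /proc prints it */
                fprintf(log, "%ld %s", (long)time(NULL), line);
                fflush(log);
            }
        }
        fclose(f);
        sleep(5);
    }
    return 0;
}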
I have been looking for a method for a while now that would allow me to access another process's memory without causing it to freeze, but with all of my googling I have found nothing.
So, my question is: is there a way to avoid stopping a process while accessing its /proc/[PID]/mem interface?
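Since Linux 3.2 (glibc 2.15) there is process_vm_readv(2), which copies memory straight out of another process without attaching to it or stopping it; it needs the same PTRACE_MODE_ATTACH permission as ptrace (same user or root), but the target keeps running. A minimal sketch; the address argument is a placeholder you would pick from /proc/<pid>/maps:

Code:
/* peek.c - read another process's memory without stopping it,
   via process_vm_readv(2) (Linux >= 3.2, glibc >= 2.15).
   usage: ./peek <pid> <hex address from /proc/<pid>/maps> */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s pid hexaddr\n", argv[0]); return 1; }
    pid_t pid = (pid_t)atoi(argv[1]);
    unsigned char buf[64];
    struct iovec local  = { .iov_base = buf, .iov_len = sizeof buf };
    struct iovec remote = { .iov_base = (void *)(uintptr_t)strtoull(argv[2], NULL, 16),
                            .iov_len  = sizeof buf };
    ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
    if (n < 0) { perror("process_vm_readv"); return 1; }
    for (ssize_t i = 0; i < n; i++)       /* hex dump of what we read */
        printf("%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
    putchar('\n');
    return 0;
}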