General :: Top Used Memory Versus Ps Processes Memory
Jan 17, 2010
I found from the command 'top' that 8GB of memory is in use. However, running 'ps' with some options, grepping for the running processes, and summing up the memory they use comes to less than 2GB. Where has the used memory gone?
The following code is for monitoring the memory used by Apache processes. But I have a problem: the figure I get from this script is much larger than the physical memory. I was told that some libraries are used simultaneously by many processes, so my total double-counts those shared parts, because Apache has many httpd processes.
Does anyone have an idea how to get the memory really used by a group of processes?
I am doing a test to get the memory used by Apache's apache2 processes. I used a script to read VmSize and VmRSS from /proc/<pid>/status and looped through all the apache2 processes to sum their VmSize and VmRSS.
I found the VmSize total (about 4GB) and the VmRSS total (about 3.4GB) are much larger than the physical memory (1GB) when the Apache server was saturated. Supposedly this is because libraries shared by many processes are counted once per process. So how can I get the physical memory actually used by the apache2 processes, or at least a more reasonable figure?
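A way around the double counting, assuming a kernel recent enough to expose Pss (proportional set size) in /proc/<pid>/smaps (2.6.25 or later): Pss splits each shared page evenly among the processes that map it, so summing it over all apache2 processes gives a realistic physical-memory figure. A minimal C sketch follows; it matches processes by name via /proc/<pid>/comm, which very old kernels lack (parsing /proc/<pid>/stat works there instead):
Code:
/* Sum Pss over all apache2 processes (sketch; needs smaps Pss support). */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;
    long total_kb = 0;

    while (proc && (de = readdir(proc)) != NULL) {
        char path[64], line[256], comm[64] = "";
        FILE *f;

        if (!isdigit((unsigned char)de->d_name[0]))
            continue;                         /* only numeric PID entries */

        snprintf(path, sizeof path, "/proc/%s/comm", de->d_name);
        if ((f = fopen(path, "r")) == NULL)
            continue;
        if (fgets(comm, sizeof comm, f))
            comm[strcspn(comm, "\n")] = '\0'; /* strip trailing newline */
        fclose(f);
        if (strcmp(comm, "apache2") != 0)
            continue;                         /* not an apache2 process */

        snprintf(path, sizeof path, "/proc/%s/smaps", de->d_name);
        if ((f = fopen(path, "r")) == NULL)
            continue;
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "Pss:", 4) == 0)
                total_kb += strtol(line + 4, NULL, 10);
        fclose(f);
    }
    if (proc)
        closedir(proc);

    printf("apache2 total Pss: %ld kB\n", total_kb);
    return 0;
}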
I work on Linux on an ARM processor for a cable modem. There is a tool I have written (as the job demands) that sends/storms customized UDP packets using raw sockets. I build each packet from scratch so that we have the flexibility to play with different options. The tool is mainly for stress-testing routers. I actually have multiple interfaces created; each interface obtains an IP address via DHCP. This is done to make the modem behave as virtual customer premises equipment (vCPE).
When the system comes up, I start the processes that are asked for. Every process I start continuously sends packets, so process 0 sends packets using interface 0, and so on. Each of these sender processes must allow configuration (changes to UDP parameters and other options at run time); that is the reason I decided on separate processes. I start them using fork and exec from the modem's provisioning process. The problem is that each process takes up a lot of memory: starting just three of them causes the system to crash and reboot.
I have tried the following:
1. I had always assumed that pushing more code into shared libraries would help. But when I moved many functions into a shared library and kept minimal code in the processes, to my surprise it made no difference.
2. I also removed all arrays and made them use the heap instead. That made no difference either. Maybe because the processes run continuously, it does not matter whether the data is on the stack or the heap?
3. I suspect the process from which I call fork is huge, and that is why the processes I spawn end up huge. Say process A is huge and starts process B by fork and exec; B inherits A's memory area. Having A start a small process C, which in turn starts B, will not help either, as C still inherits from A. I used vfork as an alternative, which did not help, and I wonder why. How can I reduce the memory used by each independent child process? (See the sketch after this list.)
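One thing worth checking, offered as a sketch rather than a definitive fix: on a system with an MMU, fork() only copy-on-writes the parent's pages, but with strict overcommit accounting (or on MMU-less targets) the kernel must reserve a full copy of a huge parent, and that reservation can be what kills the board. posix_spawn() (or vfork() followed immediately by exec) avoids the reservation entirely. The binary name and arguments below are made-up placeholders:
Code:
/* Start a packet-sender without duplicating the parent's address
 * space. "/bin/udp_storm" and its arguments are hypothetical. */
#include <spawn.h>
#include <stdio.h>
#include <sys/types.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "udp_storm", "--iface", "eth0", NULL };

    /* posix_spawn behaves like vfork+exec on Linux: no copy (and no
     * commit reservation) of the parent's memory is made. */
    int rc = posix_spawn(&pid, "/bin/udp_storm", NULL, NULL, argv, environ);
    if (rc != 0) {
        fprintf(stderr, "posix_spawn failed: %d\n", rc);
        return 1;
    }
    printf("started worker, pid %d\n", (int)pid);
    return 0;
}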
I am facing an issue where the process starts hanging. When I look closely at the logs, I see that some of the child processes forked by the parent process never finish.
1) Is it possible that the unfinished child processes occupy the socket memory of the parent process, until a point is reached where no socket memory is left to fork new child processes?
2) What is the standard limit on socket memory in Linux?
3) What is the fate of such child processes (as mentioned above)?
4) How do I debug such cases so that the exact problem area is identified? (See the sketch after this list.)
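For what it's worth regarding 1) and 3): an exited child that is never wait()ed for becomes a zombie. It keeps its process-table slot and exit status, not its sockets, but enough zombies can still exhaust process limits, and they show up as <defunct> in ps output, which is one way to start debugging 4). A minimal sketch of reaping them from a SIGCHLD handler:
Code:
/* Reap exited children from a SIGCHLD handler so they never linger. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap_children(int sig)
{
    (void)sig;
    /* WNOHANG: collect every already-exited child without blocking,
     * since several SIGCHLDs may have coalesced into one. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    sigaction(SIGCHLD, &sa, NULL);

    if (fork() == 0) {   /* child: simulate a short-lived worker */
        sleep(1);
        _exit(0);
    }
    pause();             /* parent: returns once SIGCHLD is handled */
    puts("child reaped, no zombie left");
    return 0;
}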
I wrote a program that multiplies two matrices using multiple threads, and another one using multiple processes and shared memory, both in C. I need to find the total memory usage of these programs. I know of the top command, but when my matrices are relatively small the programs don't even show up in top because they finish so fast. How can I find the memory usage in these cases? Also, how can I find the total turnaround time of my programs?
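A sketch of measuring both numbers from inside the program itself, which works no matter how quickly it finishes: getrusage() reports the peak resident set size (in kilobytes on Linux), and a pair of gettimeofday() calls brackets the turnaround time. multiply_matrices() below is a stand-in for the real work; for the multi-process version, call getrusage(RUSAGE_CHILDREN, ...) after waiting for the children:
Code:
/* Measure wall-clock turnaround and peak RSS from inside the program. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

static void multiply_matrices(void)
{
    /* stand-in for the real multi-threaded or multi-process work */
}

int main(void)
{
    struct timeval start, end;
    struct rusage ru;

    gettimeofday(&start, NULL);
    multiply_matrices();
    gettimeofday(&end, NULL);

    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_usec - start.tv_usec) / 1e6;

    /* RUSAGE_SELF covers this process and its threads; use
     * RUSAGE_CHILDREN after wait() for forked workers. */
    getrusage(RUSAGE_SELF, &ru);
    printf("turnaround: %.6f s, peak RSS: %ld kB\n", secs, ru.ru_maxrss);
    return 0;
}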
I use malloc and free a lot in my program. top shows the memory as allocated, but when I free it, the memory is not shown as released (I am watching the VIRT column in top). If this grows continuously, what will happen to my program? Will it run out of memory?
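Part of the answer, hedged because allocator behaviour varies: glibc's free() usually keeps small blocks in its arenas for reuse instead of returning pages to the kernel, so VIRT staying high after free is normal and is not by itself a leak; later mallocs reuse that memory, and the process only runs out if allocations are genuinely never freed. A sketch demonstrating this, using the glibc-specific malloc_trim() to hand free pages back (whether the heap shrinks on its own depends on fragmentation and glibc's trim threshold):
Code:
/* Freed heap memory tends to stay in glibc's arenas; malloc_trim()
 * asks glibc to give unused pages back. glibc-specific, not portable. */
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

enum { N = 100000 };

int main(void)
{
    static char *blocks[N];

    /* Small blocks come from the main heap arena (large ones would
     * be mmap'ed and genuinely unmapped on free). */
    for (int i = 0; i < N; i++)
        blocks[i] = malloc(4096);
    char *pin = malloc(4096);    /* one live block at the top of the heap */

    for (int i = 0; i < N; i++)
        free(blocks[i]);         /* freed, but pinned below 'pin':
                                    VIRT in top stays high here */

    malloc_trim(0);              /* release the free pages, as far as
                                    glibc can with 'pin' still live */
    puts("trimmed; compare VIRT/RES in top");
    getchar();                   /* pause so top can be inspected */
    free(pin);
    return 0;
}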
I have a computer with 16GB of RAM. At the moment, top shows all the RAM as taken (NOT by cache), but the memory used by the various processes adds up to far less than 16GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
I recently built a new server, running 64 bit Slackware 13.0 with the following specs:
MSI 785GTM-E45
AMD Phenom II X2 550
2GB DDR2
Onboard video from AMD 785G chipset
2x 80GB IDE system drives using software RAID, with a 2GB swap partition
I only include these because I'm not convinced my problem is not hardware related at some level. Basically, when I first start up the system, the memory usage is anywhere from 60 to 200MB. Then it starts to gradually climb until there is only 12-15MB free. This can take anywhere from a few hours to a few days.
The only thing I really use this server for is to serve Samba shares and the occasional SSH login. I've set up the Samba shares to be accessed by my Windows machines as well as a Mac. Initially I just browsed the network when I wanted to reach a share from these machines, but that was too slow and unreliable (the server would not always show up in Windows), so now I automatically mount the share as a network drive at startup (from Windows). I don't know if this has anything to do with the steadily increasing memory usage. These systems are not on/connected all the time, but the memory usage seems to rise anyway.
When I run top, it reports that nearly all of the physical memory has been consumed (after a while of uptime), but none of the swap space has even been touched. This is a typical output of the first several lines, sorted by swap size:
I can't be sure, but I believe the message just repeated itself before this part. MS-7549 is one of the components of the mainboard (MSI), maybe the northbridge or something (it seems to be associated with more than one model), so I thought for sure this meant a hardware problem, maybe with the storage controller or memory. But Memtest86 passes for more than a day without any errors, and I loaded Windows on one of the disks and ran it as hard as I could with prime95, and nothing so much as flinched. So I have to think this is some kind of issue with Linux and/or how it is handling my hardware configuration.
I'm running several SHOUTcast server instances and a WowzaMediaServer instance on a CentOS machine. I'm experiencing a memory leak, but I can't figure out which processes are eating the memory.
The top command reports as follows:
[Code]...
Something mysterious to me (I'm still a Linux newbie) is that top reports a total of 7.5GB of RAM used, but only a very small percentage (0-1%) for each single process. Memory consumption starts at 1GB/8GB after a reboot and gradually increases to 8GB over three days of running. I'm practising with Linux, but I still have a lot to learn about what's happening on my system. For instance, are there kernel logs saved somewhere that I can look at?
I have two processes that share a piece of memory, and I want to use the shared memory to send data from one process to the other. It's like a simple producer-consumer problem: when the producer fills the shared memory, it waits until the consumer has consumed some data from it; the consumer has to wait if there is no data in the memory. The thing gets complicated when both sides are allowed to sleep and must wait for the other to wake them up.
I wanted to use a pthread condition variable for synchronization, but it doesn't work across multiple processes. I tried semaphores, but they are quite complicated and I still cannot get it right. I believe this is a common problem and someone must have written similar code before; maybe the code is even wrapped in a library. But when I search for it on the Internet, I only find information about how to share memory between processes. Does anyone know where I can find this kind of code or library?
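Pthread condition variables do work across processes if the mutex and condvar live in the shared memory itself and are initialised with the PTHREAD_PROCESS_SHARED attribute. A minimal single-slot producer-consumer sketch along those lines (the shm name "/demo_ring" is made up; build with gcc -pthread -lrt):
Code:
/* Producer/consumer across two processes, synchronised by a
 * process-shared pthread mutex and condition variables that live
 * inside the shared segment. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct ring {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
    int full;                    /* one-slot buffer: 0 = empty, 1 = full */
    int value;
};

int main(void)
{
    int fd = shm_open("/demo_ring", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct ring));
    struct ring *r = mmap(NULL, sizeof *r, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);

    /* The pshared attribute is what makes this work across fork();
     * default mutexes and condvars are process-private. */
    pthread_mutexattr_t ma;
    pthread_condattr_t ca;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&r->lock, &ma);
    pthread_cond_init(&r->not_empty, &ca);
    pthread_cond_init(&r->not_full, &ca);

    if (fork() == 0) {                      /* consumer process */
        pthread_mutex_lock(&r->lock);
        while (!r->full)                    /* sleep until data arrives */
            pthread_cond_wait(&r->not_empty, &r->lock);
        printf("consumed %d\n", r->value);
        r->full = 0;
        pthread_cond_signal(&r->not_full);  /* wake a waiting producer */
        pthread_mutex_unlock(&r->lock);
        _exit(0);
    }

    pthread_mutex_lock(&r->lock);           /* producer (parent) */
    while (r->full)                         /* sleep until there is room */
        pthread_cond_wait(&r->not_full, &r->lock);
    r->value = 42;
    r->full = 1;
    pthread_cond_signal(&r->not_empty);     /* wake the consumer */
    pthread_mutex_unlock(&r->lock);

    wait(NULL);
    shm_unlink("/demo_ring");
    return 0;
}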
This is my first post in these forums. I'm still quite new to Linux (using Mint 9), so please bear with my not-very-articulate question(s). When I boot up and open a tty terminal, I get a message saying "Memory corruption detected in low memory." I've done an extensive Google search on the issue and it seems not uncommon. I ran a memtest with no errors returned, so I'm sure there's nothing really wrong with the memory; apparently it's a bug in the kernel that's causing this.
I know of /etc/security/limits.conf and that it can be used to limit all sorts of good things, but I haven't found anything about using it when the users come from LDAP. Would I be able to do something like
@"Domain Users" soft nproc 25
@"Domain Users" hard nproc 40
where Domain Users is the group all users belong to in our system.
I am looking to buy some memory for my netbook. Currently I have 1GB of DDR3 memory. The specification says that 2GB of memory is the maximum; however, when I do the following it says that 4GB is the max:
I am looking for a free database with low memory usage that offers InnoDB-like and memory-like engines, has a C API, supports triggers, and offers client/server operation, for use in embedded Linux systems.
I am new to C and Linux. My code below does arbitrary writes, but I can't figure out where or how it does them.
I am calling the insertNode() function with seq = 'MISSISSPPI$' and alphabets = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ$'
Code:
Weird behaviour I should mention: when I check for a NULL pointer in node->child[index], the unassigned values are not NULL anymore; they point to arbitrary memory.
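A common cause of exactly this symptom, offered as a guess since the code isn't shown: malloc() does not zero the memory it returns, so a freshly allocated child-pointer array contains garbage that merely looks like valid addresses, and the NULL checks then follow it into arbitrary memory. calloc() (or an explicit memset) guarantees the pointers start out NULL. A sketch with an assumed node layout:
Code:
/* Guess at the fix: allocate nodes with calloc so child pointers
 * start as NULL. The node layout below is assumed, not the OP's. */
#include <stdlib.h>

#define ALPHABET 27                      /* 'A'-'Z' plus '$' */

struct node {
    struct node *child[ALPHABET];
};

static int index_of(char c)
{
    return c == '$' ? ALPHABET - 1 : c - 'A';
}

static void insertNode(struct node *root, const char *seq)
{
    struct node *cur = root;
    for (; *seq; seq++) {
        int i = index_of(*seq);
        if (cur->child[i] == NULL)       /* reliable only if zeroed */
            cur->child[i] = calloc(1, sizeof(struct node));
        cur = cur->child[i];
    }
}

int main(void)
{
    struct node *root = calloc(1, sizeof(struct node));
    insertNode(root, "MISSISSPPI$");
    return 0;
}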
We found that if we use 'top' to show the memory usage of a server (SuSE Linux 10), we get virtual memory usage as well as resident memory usage. For one particular process the virtual memory is around 1.1GB, which is large, while the resident memory is only 300MB. Does anyone know what the difference is? I would also like to know whether the difference (1.1GB - 300MB = 800MB) is actually available for use by other applications in the system.
I am monitoring physical memory on a server I administer, and my hardware provider told me they had increased the physical memory to 4GB... However, using several tools (free -m; top; dmesg | grep Memory; grep MemTotal /proc/meminfo) I discovered that I actually have 3GB, not 4. My doubt comes from the fact that dmesg | grep Memory tells me I have 3103396k/4194304k available. The first number is effectively 3GB, but the second one is 4! So why am I seeing two different numbers?
I did a fresh install of Ubuntu 9.10 and installed some software after that. Since then, some process has been eating half of my memory. I have checked the processes running in the system monitor, but everything looks normal: the most is consumed by compiz, at about 26MB, which seems very normal. I restarted my computer several times, and for the first 5 minutes it is fine; after that my CPU fan runs at very high speed and one CPU is 95% used (I have a dual core). Please help me out; this invisible thing is driving me crazy. I am attaching my htop screenshot (sorted by CPU %); right now the CPU is not fully used, but the fan is still struggling hard and fast.
I am writing an application that wants to access peripheral registers outside the standard (allowed) memory area.
Doing so gets me a "segmentation fault".
I know this is natural behaviour.
One way of getting around this is writing a kernel module to be loaded by Linux. I will consider that some time later.
For now, I want to get a quick result and let my Linux program (compiled with gcc) write to those peripheral memory areas. Is there a direct way to do so?
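There is a direct way on Linux, sketched below: open /dev/mem as root and mmap() the peripheral's physical address range into the process. Whether it works depends on the kernel (CONFIG_STRICT_DEVMEM can block some ranges, though device registers are usually still reachable). REG_BASE here is a made-up address; substitute the one from your SoC datasheet:
Code:
/* Map a peripheral's physical registers into user space via /dev/mem. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REG_BASE 0x40000000UL       /* hypothetical peripheral base address */
#define REG_SIZE 4096               /* map one page of registers */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");    /* needs root privileges */
        return 1;
    }
    volatile uint32_t *regs = mmap(NULL, REG_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, REG_BASE);
    if (regs == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("register 0: 0x%08x\n", regs[0]);   /* read a 32-bit register */
    regs[1] = 0x1;                             /* write one (offset 0x4) */
    munmap((void *)regs, REG_SIZE);
    close(fd);
    return 0;
}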
Is it possible that SHM shared memory is counted as cache memory on Linux with kernel 2.6.18? I find it really odd, since this memory is not file-backed, but I have a piece of code that loads data using shm_open+mmap, and it generates an amount of cache memory in /proc/meminfo that corresponds exactly to the amount of shared memory. (I load that data from a file, but I use posix_fadvise(fd,0,0,POSIX_FADV_DONTNEED) to ensure the file itself is not cached, and I made sure that works as expected.) As far as I know, SHM memory was not tagged as cache memory in kernel 2.6.9. If this is the case, it is really unfortunate: cache memory can normally be considered part of the "available" memory, since it can be flushed promptly, but that is clearly not true of SHM memory... Is there an easy way to get the total amount of SHM memory in use on a system?
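On the counting question: pages backing POSIX shared memory live on the tmpfs mounted at /dev/shm, and tmpfs pages are accounted as page cache, which would explain what you see. That also suggests one way to total the POSIX SHM in use: measure the used space of that filesystem (System V segments are separate; ipcs -m lists those). A sketch, assuming /dev/shm is the mount point:
Code:
/* Total POSIX shared memory in use = used space of the /dev/shm tmpfs. */
#include <stdio.h>
#include <sys/statfs.h>

int main(void)
{
    struct statfs s;

    if (statfs("/dev/shm", &s) != 0) {
        perror("statfs /dev/shm");
        return 1;
    }
    /* used blocks * block size = bytes of tmpfs (POSIX shm) in use */
    unsigned long long used =
        (unsigned long long)(s.f_blocks - s.f_bfree) * s.f_bsize;
    printf("POSIX shm in use: %llu kB\n", used / 1024);
    return 0;
}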
I have been setting up a VPS I got with bhost.net, with CentOS installed. I've been learning and have set up everything I need, with the exception of FTP/SFTP.
Using yum I installed vsftpd and ran into problems. Thinking it was something I might have done, I did a fresh install of CentOS, and I still receive the same problem on a fresh install, so it is nothing I have done to the server.
The problem is that when connecting via an SFTP client I get an out-of-memory error. This error is listed in the PuTTY FAQ (url) under A.7.5; there is a brief explanation of the cure under A.7.6.
It mentions a login script, but I don't know where this is located. I'm a novice at Linux but by no means incompetent when it comes to computing.
I have a query regarding top and virtual memory. When we run top, it shows VIRT (virtual memory), RES (resident memory) and SHR (shared memory). The total virtual memory of my machine is 4GB (2GB RAM + 2GB swap), yet I can see a process showing 4000m in the VIRT column. What does that mean, when top shows more VIRT for one process than the virtual memory actually available?
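VIRT counts address space, not memory: every mapping a process has made (libraries, files, untouched anonymous regions) is included, and Linux overcommits, so a mapping costs nothing until its pages are actually written. A sketch that makes a process's VIRT exceed RAM plus swap on a 64-bit machine while RES barely moves (MAP_NORESERVE keeps the overcommit check from refusing the mapping):
Code:
/* Reserve 8 GB of address space without touching it: VIRT jumps,
 * RES does not. 64-bit system assumed. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t sz = 8UL << 30;      /* 8 GB of address space */

    /* MAP_NORESERVE: do not charge swap for the mapping, so the
     * kernel's overcommit accounting will not refuse it. */
    void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("pid %d mapped 8 GB; compare VIRT and RES in top\n",
           (int)getpid());
    pause();                    /* keep the mapping alive for inspection */
    return 0;
}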