General :: Getting The Memory Used By Multiple Processes?
Sep 2, 2010
The following code is for monitoring the memory used by Apache processes. But the total I get from this script is much larger than the physical memory. I was told that some libraries are used simultaneously by many processes, so my total double-counts those shared pages, because Apache runs many httpd processes.
Does anyone have an idea of how to get the memory actually used by multiple processes?
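For reference, here is a minimal C sketch of the kind of script described, under the assumption that the PIDs are passed on the command line (hypothetical usage: ./rsssum $(pidof httpd)). It sums VmRSS from /proc/<pid>/status; the total exceeds physical RAM precisely because every process is charged the full size of each shared library it maps.

/* Sum VmRSS across a set of PIDs from /proc/<pid>/status. Shared
 * library pages are counted once per process, which is why the
 * total can exceed physical RAM. */
#include <stdio.h>

int main(int argc, char **argv)
{
    long total_kb = 0;
    for (int i = 1; i < argc; i++) {
        char path[64], line[256];
        snprintf(path, sizeof(path), "/proc/%s/status", argv[i]);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                    /* process may have exited */
        long kb;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "VmRSS: %ld kB", &kb) == 1) {
                total_kb += kb;
                break;
            }
        }
        fclose(f);
    }
    printf("summed VmRSS: %ld kB (shared pages double-counted)\n", total_kb);
    return 0;
}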
I wrote a program that multiplies two matrices using multiple threads, and another that uses multiple processes and shared memory; both are in C. I need to find the total memory usage of these programs. I know about the top command, but when my matrices are relatively small the programs finish so fast that they never even show up in top. How can I find the memory usage in those cases? Also, how can I measure the total turnaround time of my programs?
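One approach that handles fast-finishing programs, sketched here as a hypothetical wrapper (./measure ./matmul args...): fork and exec the program under test, wait for it, then read the peak resident set size from getrusage(RUSAGE_CHILDREN) and the turnaround time from clock_gettime. On older glibc, link with -lrt for clock_gettime.

/* Measurement wrapper: runs a command, reports its peak RSS and
 * wall-clock turnaround time. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        execvp(argv[1], &argv[1]);
        perror("execvp");              /* reached only if exec failed */
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    clock_gettime(CLOCK_MONOTONIC, &end);

    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);   /* ru_maxrss is in kilobytes on Linux */
    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("peak RSS: %ld kB, turnaround: %.3f s\n", ru.ru_maxrss, secs);
    return 0;
}

GNU /usr/bin/time -v reports the same peak figure ("Maximum resident set size") without writing any code.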
I found from the command 'top' that 8GB of memory is used. However, using 'ps' with some options to grep the running processes and then summing up their memory gives less than 2GB. Where has the used memory gone?
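The gap is usually kernel memory that belongs to no process, mainly the page cache, buffers, and slab, which top counts as "used" but ps cannot attribute to any PID. A small sketch that prints the relevant /proc/meminfo fields for comparison:

/* Print the /proc/meminfo fields that account for memory top calls
 * "used" but that no process owns: page cache, buffers, slab. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (!strncmp(line, "MemTotal:", 9) || !strncmp(line, "MemFree:", 8) ||
            !strncmp(line, "Buffers:", 8)  || !strncmp(line, "Cached:", 7) ||
            !strncmp(line, "Slab:", 5))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}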
A few days ago, the server did not respond to a user's SSH request at night. The next morning the user tried to log in from the terminal to check what had gone wrong. As the computer was unresponsive, he decided to reboot it by turning the power off. To make the story short, the server rebooted, but he could not log in to his account. The server failed to start some processes, though it still asked for his account username. Even with the correct username and password, the server rejects the request. I also could not log in as root.
I checked the server logs by booting into single-user mode. Here are some interesting lines:
Before the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
After the reboot:
irqbalance : can't balance irqs on a uniprocessor system: failed
fsck: fsck /: (this line is repeated 900+ times)
I am running a test to measure the memory used by Apache's apache2 processes. I used a script that reads VmSize and VmRSS from /proc/<pid>/status and loops over all the apache2 processes to sum the two values.
I found that the summed VmSize (about 4GB) and VmRSS (about 3.4GB) are much larger than physical memory (1GB) when the Apache server was saturated. I was told this is because library pages shared by many processes are counted multiple times. So how do I get the physical memory actually used by the apache2 processes, or at least a more reasonable figure?
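On kernels that expose "Pss:" lines in /proc/<pid>/smaps (2.6.25 and later), the proportional set size charges each shared page 1/N to each of the N processes mapping it, so summing Pss avoids the double counting. A sketch, with hypothetical usage ./psssum $(pidof apache2):

/* Sum Pss over a set of PIDs. Unlike summed RSS, summed Pss should
 * not exceed physical RAM, because shared pages are split among the
 * processes mapping them. */
#include <stdio.h>

int main(int argc, char **argv)
{
    long total_kb = 0;
    for (int i = 1; i < argc; i++) {
        char path[64], line[256];
        snprintf(path, sizeof(path), "/proc/%s/smaps", argv[i]);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                  /* exited, or no permission */
        long kb;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "Pss: %ld kB", &kb) == 1)
                total_kb += kb;        /* one Pss line per mapping */
        fclose(f);
    }
    printf("summed Pss: %ld kB\n", total_kb);
    return 0;
}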
I am facing an issue where a process starts hanging. Looking closely at the logs, I see that some of the child processes forked by the parent never finish. (A reaping sketch follows these questions.)
1) Is it possible that the unfinished child processes hold on to the parent's socket memory until a point is reached where no socket memory is left to fork new child processes?
2) What is the default limit on socket memory in Linux?
3) What is the fate of such unfinished child processes (as mentioned above)?
4) How do I debug such cases so that the exact problem area can be identified?
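On questions 3) and 4): children that exit before the parent waits for them become zombies, keeping their PID and exit status pinned until reaped. A common pattern, sketched below, is a SIGCHLD handler that reaps every finished child without blocking:

/* Reap all finished children from a SIGCHLD handler so none linger
 * as zombies. */
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

static void reap_children(int sig)
{
    (void)sig;
    /* Loop: several children may have exited for a single SIGCHLD. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = reap_children;
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    sigaction(SIGCHLD, &sa, NULL);

    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {           /* child: stand-in for real work */
            sleep(1);
            _exit(0);
        }
    }
    for (int i = 0; i < 3; i++)      /* sleep may return early on SIGCHLD */
        sleep(1);
    printf("children reaped; no zombies should remain\n");
    return 0;
}

Zombies are easy to spot while debugging: they appear in ps output with state Z.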
I am a bit confused about MPMs. I read an article here: [URL]. I still have a few very basic doubts:
1. What exactly is an MPM? A module has a specific function to perform, so what is the specific function of an MPM?
2. What are the "multiple processes" it handles? Are they connections? Quoting from the article: "The main difference between MPMs and normal modules is that only one of the former can be used and multiple ones can be loaded in the latter".
3. There are multiple MPMs, but don't they operate differently and possibly conflict when more than one is loaded and running?
I recently switched to Fedora from Windows, and I love the Fedora terminal a lot. The problem is that when I run a command in the terminal, I have to wait for it to finish before executing another command. This is very inconvenient: if I open Eclipse from the terminal, Eclipse hogs the terminal until I close it, so if I want to use the terminal again I have to open another one. Hence the question: is there any way to run multiple processes (commands) from a single terminal?
Using Ubuntu Server 10.04.2 64-bit, all up to date.
I am running multi-threaded processes. These use OpenMP in my own code and the multi-threaded ACML maths library. When run in the foreground, everything is fine, i.e. if I have set
export OMP_NUM_THREADS=8
then all 8 cores are in use when I start and things whizz along. However, when running overnight, logged out, using e.g. 'at now + 1 minute' followed by the command, I only get about 130% CPU and it slows down accordingly. I have tried renice'ing and calling from within a bash script in case sh was doing something odd, but nothing seems to solve it. I am sure this was not the case in the recent past.
The libraries being used are shared versions, in case that has any bearing.
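One way to narrow this down, assuming the batch environment may differ from the interactive one: a tiny check program that prints what OpenMP actually sees, run once from the shell and once via 'at' (compile with gcc -fopenmp check.c):

/* Print the thread settings OpenMP sees, to compare an interactive
 * run against a run submitted through 'at'. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const char *env = getenv("OMP_NUM_THREADS");
    printf("OMP_NUM_THREADS in environment: %s\n", env ? env : "(unset)");
    printf("omp_get_max_threads(): %d\n", omp_get_max_threads());

    #pragma omp parallel
    {
        #pragma omp single
        printf("threads in parallel region: %d\n", omp_get_num_threads());
    }
    return 0;
}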
I have written a simple script which has to find required patterns in a bunch of files (each around 2GB, containing the output of seq 1 10000000000000) on an 8-core machine. I currently fork 6 child processes which run simultaneously on 6 cores of the processor, each searching for the required pattern in a different file and informing the parent through a pipe when the pattern is found.
The problem is that when a child process finishes reading its text file, it becomes a zombie. It exits cleanly when I put $SIG{CHLD} = "IGNORE"; in the script. Can anyone tell me what is going on, and how I can improve the communication between the child and parent processes?
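The script in question is Perl, but the mechanics are the same in any language: a child that exits stays a zombie until the parent waits for it, and $SIG{CHLD} = "IGNORE" simply tells the kernel to auto-reap. A C sketch of the described pattern with explicit reaping instead:

/* Children report through a shared pipe; the parent reads until EOF
 * and then reaps each child with waitpid() so no zombies linger. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

#define NCHILDREN 6

int main(void)
{
    int fd[2];
    pipe(fd);

    pid_t pids[NCHILDREN];
    for (int i = 0; i < NCHILDREN; i++) {
        pids[i] = fork();
        if (pids[i] == 0) {            /* child: stand-in for the search */
            char msg[64];
            int n = snprintf(msg, sizeof(msg), "child %d: pattern found\n", i);
            write(fd[1], msg, n);
            _exit(0);
        }
    }
    close(fd[1]);                      /* parent writes nothing */

    char buf[64];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);     /* EOF once all children exit */

    for (int i = 0; i < NCHILDREN; i++)
        waitpid(pids[i], NULL, 0);     /* explicit reap: no zombies */
    return 0;
}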
I recently built a new server, running 64 bit Slackware 13.0 with the following specs:
MSI 785GTM-E45
AMD Phenom II X2 550
2GB DDR2
Onboard video from AMD 785G chipset
2x 80GB IDE system drives using software RAID with 2GB swap partition
I only include these because I'm not convinced my problem isn't hardware-related at some level. Basically, when I first start up the system, memory usage is anywhere from 60 to 200MB. Then it gradually climbs until there is only 12-15MB free. This can take anywhere from a few hours to a few days.
The only thing I really use this server for is Samba shares and the occasional SSH login. I've set up the Samba shares to be accessed by my Windows machines as well as a Mac. Initially I just browsed the network when I wanted to reach a share from those machines, but that was too slow and unreliable (the server would not always show up in Windows), so now I automatically mount the share as a network drive at startup (from Windows). I don't know whether this has anything to do with the steadily increasing memory usage. These systems are not on/connected all the time, but the memory usage seems to rise anyway.
When I run top, it reports that nearly all of the physical memory has been consumed (after a while of uptime), but none of the swap space has even been touched. This is a typical output of the first several lines, sorted by swap size:
I can't be sure, but I believe the message just repeated itself before this part. MS-7549 is one of the components of the mainboard (MSI), maybe the northbridge or something (it seems to be associated with more than one model), so I thought for sure this meant a hardware problem, maybe with the storage controller or memory. But Memtest86 passes for more than a day without any errors, and I loaded Windows on one of the disks and ran it as hard as I could with prime95, and nothing so much as flinched. So I have to think this is some kind of issue with Linux and/or how it handles my hardware configuration.
I work on Linux on an ARM processor for a cable modem. I have written a tool (as the job demands) that sends/storms customized UDP packets using raw sockets. I build each packet from scratch so that we have the flexibility to play with different options. The tool is mainly for stress-testing routers.
The details are here.
I actually have multiple interfaces created. Each interface obtains its IP address using DHCP. This is done to make the modem behave as virtual customer premises equipment (vCPE).
When the system comes up, I start the processes that are requested. Every process I start sends packets continuously: process 0 sends on interface 0, and so on. Each of these sender processes must allow configuration (changes to UDP parameters and other options) at run time; that is the reason I decided on separate processes.
I start these processes using fork and exec from the modem's provisioning process.
The problem now is that each process takes up a lot of memory; starting just 3 of them causes the system to crash and reboot.
I have tried the following:
1 - I have always assumed that pushing more code into shared libraries would help. So when I moved many functions into a shared library, keeping minimal code in the processes, it made no difference, to my surprise.
2 - I also removed all arrays and made them use the heap. However, it made no difference. Maybe this is because the processes run continuously, so it does not matter whether the memory is stack or heap?
3 - I suspect that the process from which I call fork is huge, and that this is why the processes I spawn end up huge as well. I am not sure how else to go about it: say process A is huge and I start process B by fork and exec, then B inherits A's memory area. Having A start a small process C, which in turn starts B, will not help either, as C still inherits from A. I used vfork as an alternative, which did not help, and I wonder why.
How do I reduce the memory used by each independent child process?
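Two properties worth confirming on the target, illustrated by the sketch below ("./sender" and its arguments are placeholders for the real tool): Linux fork() is copy-on-write, so the child shares the parent's pages until either side writes to them, and a successful exec replaces the child's entire address space, after which the provisioning process's size should no longer matter.

/* Fork-and-exec pattern: the child's address space is replaced by the
 * new program image on a successful execv(). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *args[] = { "./sender", "--interface", "0", NULL };

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: from here until execv, pages are only COW-shared. */
        execv(args[0], args);
        perror("execv");             /* reached only if exec failed */
        _exit(127);
    }
    printf("started %s as pid %d\n", args[0], (int)pid);
    /* Parent continues provisioning; the child is a separate image now. */
    return 0;
}

If three such children still exhaust RAM, comparing their Pss totals in /proc/<pid>/smaps would show how much of each process is genuinely private rather than shared.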
I'm running several SHOUTcast server instances and a WowzaMediaServer instance on a CentOS machine. I'm experiencing a memory leak problem, but I can't figure out which processes are eating the memory.
The top command reports the following:
[Code]...
Something mysterious to me (I'm still a Linux newbie) is that top reports a total of 7.5GB of RAM used but only very small percentages for the individual processes (0-1%). Memory consumption starts at 1GB/8GB after a reboot and gradually increases to 8GB over three days of running. I'm practising with Linux, but I still have a lot to learn about what's happening on my system. For instance, are there kernel logs saved somewhere that I can look at?
I have two processes that share a piece of memory, and I want to use the shared memory to send data from one process to the other. It's like the classic producer-consumer problem: when the producer fills the shared memory, it waits until the consumer has consumed some data; the consumer waits if there is no data in the memory. Things get complicated when both processes are allowed to sleep and wait for the other to wake them up.
I wanted to use a pthread condition variable for synchronization, but it doesn't work across processes (at least not by default). I tried semaphores, but it got quite complicated and I still cannot get it right. I believe this is a common problem and someone must have written similar code before; maybe it is even wrapped in a library. But when I search the Internet, I only find information about how to share memory between processes. Does anyone know where I can find this kind of code or a library for it?
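One common pattern, sketched here: unnamed POSIX semaphores placed in the shared segment itself (sem_init with pshared=1), building a single-slot producer-consumer channel. Each side blocks in sem_wait until the other's sem_post wakes it. Compile with -pthread (and -lrt on older glibc).

/* Single-slot producer-consumer over shared memory, synchronized by
 * process-shared POSIX semaphores. */
#include <stdio.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>

struct channel {
    sem_t empty;       /* counts free slots (starts at 1) */
    sem_t full;        /* counts filled slots (starts at 0) */
    int   value;       /* the single data slot */
};

int main(void)
{
    /* Anonymous shared mapping, inherited across fork(). */
    struct channel *ch = mmap(NULL, sizeof(*ch), PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (ch == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    sem_init(&ch->empty, 1, 1);   /* pshared=1: usable across processes */
    sem_init(&ch->full,  1, 0);

    if (fork() == 0) {            /* consumer */
        for (int i = 0; i < 5; i++) {
            sem_wait(&ch->full);  /* sleep until data is available */
            printf("consumed %d\n", ch->value);
            sem_post(&ch->empty); /* wake the producer */
        }
        _exit(0);
    }

    for (int i = 0; i < 5; i++) { /* producer */
        sem_wait(&ch->empty);     /* sleep until the slot is free */
        ch->value = i * i;
        sem_post(&ch->full);      /* wake the consumer */
    }
    wait(NULL);
    return 0;
}

For what it's worth, pthread condition variables can also work across processes, but only if both the mutex and the condition variable are initialized with the PTHREAD_PROCESS_SHARED attribute and live in the shared segment.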
I know of /etc/security/limits.conf and that it can be used to limit all sorts of useful things, but I haven't found anything that discusses using it when the users come from LDAP. Would I be able to do something like
@"Domain Users" soft nproc 25 @"Domain Users" hard nproc 40
where Domain Users is the group all users belong to in our system.
I am using malloc and free a lot in my program. top shows the memory as allocated, but when I free it, the memory is not shown as released (I am watching VIRT in top). If this grows continuously, what will happen to my program? Will it run out of memory?
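A likely explanation, assuming glibc: free() usually returns memory to the allocator, not to the kernel, so VIRT stays flat after frees; that alone is not a leak, and the program only runs out of memory if allocations are never freed at all. A glibc-specific sketch that demonstrates this and then asks the allocator to hand free pages back:

/* glibc keeps freed heap memory mapped for reuse, so VIRT does not
 * drop after free(). malloc_trim(0) asks it to return free pages. */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>    /* malloc_trim, a glibc extension */

int main(void)
{
    char *blocks[1000];
    for (int i = 0; i < 1000; i++)
        blocks[i] = malloc(64 * 1024);   /* ~64MB, below mmap threshold */
    for (int i = 0; i < 1000; i++)
        free(blocks[i]);

    /* Without this, the freed pages typically stay mapped for reuse. */
    malloc_trim(0);

    getchar();   /* pause here and inspect VIRT/RES in top */
    return 0;
}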
I have a computer with 16GB of RAM. At the moment, top shows all the RAM as taken (NOT by cache), but the RAM used by the various processes adds up to far less than 16GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
I have an ATI Radeon HD 3300 onboard video chipset and an ATI Radeon HD 4350 PCI card. What I want is to have both displays available from one mouse/keyboard: I want to play media on one and have the other as my main desktop. The problem is that with Xinerama enabled, KDE desktop effects do not work (KDE says XComposite and XDamage are not available, even though I explicitly enabled them as extensions in xorg.conf), and performance is quite bad. Without Xinerama, performance is great and desktop effects work great, but there is a lot of trouble with full-screen video, and the KWin window manager does not manage the second display (although I can run a second instance of KWin on :0.1).
Is anyone successfully using a VNC client on a Mac to control a Debian server? I have vncserver set up properly on the Debian machine, but I'm having problems connecting to it from both a PowerMac running Tiger and a MacBook Pro running Leopard. I can connect without problems from a machine running Slackware 12.2; I have not yet set up port forwarding on my router for remote connections. My Debian machine runs the latest stable release of Squeeze with KDE4. I originally tried RealVNC Enterprise for OS X, but I'm not going to buy it, so I need another alternative once the 30-day trial ends, as they have no free version for OS X. The situation is that I do freelance graphic design on the PowerMac with Cinema4D and Photoshop, so I spend most of my time on that machine, which is located in my home studio in the attic. Aside from the MacBook and a Dell desktop (the family machine), all my other machines and network hardware are in the basement. Going from the attic to the basement every time I need to do something on another machine is not practical, and the only other machine I need to access regularly is the Debian box in the basement, so this makes the most sense.
I also have a 14-year-old living in the house, and he's fascinated by all this and will meddle with anything he gets the chance to, so all the Linux machines and network hardware need to be behind lock and key.
This is my first post in these forums. I'm still quite new to Linux (using Mint 9), so please bear with my not-very-articulate question(s). When I boot up and open a tty terminal, I get a message saying "Memory corruption detected in low memory." I've done an extensive Google search about the issue and it seems not uncommon. I ran a memtest with no errors returned, so I'm sure there's nothing really wrong with the memory; apparently a bug in the kernel is causing this.
Now that Ubuntu 10.04 has multi-touch capabilities built in: if I do not have a multi-touch screen or surface device, can I get 2 USB mice and 2 pointers on the screen, one for the right hand and one for the left? I am ambidextrous and would find it very convenient to have 2 mice.
I'm running 64-bit CentOS 5.6 and using virt-manager. On one of my guest OSes, Windows 7, the maximum number of physical CPUs is 2, though you can have unlimited CPU cores (my work machine, for example, has 1 processor with 4 cores). The issue is that Xen only lets you set the vcpu argument in the Xen config file. How can I make 1 CPU present several cores, just as Windows would see this machine if I were installing directly on the hardware instead of in a VM? I've searched for 2 days straight trying to address this, with very little progress. Does anyone know where a Xen support forum is? All I find is the Citrix Xen support forums.
Below is the best information I have found on this, but I don't know how to adapt it to my CPU. When I enter it in my Xen config, it is essentially ignored and only the vcpu= value is used, so Windows shows 2 CPUs, each with only one core. I'd like 1 or 2 CPUs showing several cores. The physical hardware is 2x Xeon 5300 quad-core CPUs.
# Expose to the guest multi-core cpu instead of multiple processors
# Example for intel, expose a 8-core processor :
#cpuid=['1:edx=xxx1xxxxxxxxxxxxxxxxxxxxxxxxxxxx,[code]........
I had this error when installing and running a vncserver before, which I have since removed. However, the xterms seem to remain on the system and keep regenerating themselves. Should the PIDs stay the same each time I run this?
I need to keep a small list of processes in a monitor.conf file. A shell script needs to check the status of these processes and restart them if they are down. This shell script needs to run every couple of minutes.
The output of the shell script needs to be recorded in a log file.
So far I have created a blank monitor.conf file, arranged for the shell script to run automatically every couple of minutes, and had the script send some default test information to the log file.
How do I go about the next part: having the shell script check the status of these processes and restart them if they are down?
I have put the commands below in the conf file, but I am not sure if this is right.
ps ax | grep httpd
ps ax | grep apache
I also don't know whether the shell script should read from the conf file or the conf file should somehow pass information to the shell script.
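What the poster wants is a shell script, so the following is only an illustration of the logic such a script would implement, written in C: read one process name per line from monitor.conf (an assumed format), scan /proc for a matching comm entry, and report any process that is down; the actual restart command is left out.

/* Check whether each process named in monitor.conf is running, by
 * scanning /proc/<pid>/comm (available on kernels 2.6.33+; older
 * kernels can read /proc/<pid>/stat instead). */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <ctype.h>

static int is_running(const char *name)
{
    DIR *proc = opendir("/proc");
    if (!proc)
        return 0;
    struct dirent *e;
    char path[288], comm[256];

    while ((e = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0]))
            continue;                     /* only PID directories */
        snprintf(path, sizeof(path), "/proc/%s/comm", e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(comm, sizeof(comm), f)) {
            comm[strcspn(comm, "\n")] = '\0';
            if (strcmp(comm, name) == 0) {
                fclose(f);
                closedir(proc);
                return 1;
            }
        }
        fclose(f);
    }
    closedir(proc);
    return 0;
}

int main(void)
{
    FILE *conf = fopen("monitor.conf", "r");
    if (!conf) {
        perror("monitor.conf");
        return 1;
    }
    char name[256];
    while (fgets(name, sizeof(name), conf)) {
        name[strcspn(name, "\n")] = '\0';
        if (name[0] == '\0')
            continue;                     /* skip blank lines */
        printf("%s: %s\n", name,
               is_running(name) ? "running" : "DOWN - restart needed");
    }
    fclose(conf);
    return 0;
}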