OpenSUSE :: Created A Tab To Look At The Physical Memory In The System Monitor Program?
May 11, 2010
I have a system with 1 GB RAM. I'm running KDE 4. I created a tab to look at the Physical Memory in the System Monitor program, which I assume looks at the same stats that "top" does. In that Physical Memory tab I have 3 tables: Used Memory, Free Memory, and Application Memory. The Used Memory table shows that the system is using .94 of .98 GiBytes. The Free Memory table shows that the system has .5 GiBytes of RAM free.
However, the Application Memory table shows that only 339 MBytes of RAM is being used. Note that "top" shows the same info. So where is the other .6 GiBytes of RAM that the Used Memory table shows as being used? If I look at the process table, which is supposed to encompass all of the processes running, including the ones for the OS, it appears to add up to the 339 MBytes shown in the Application Memory table. Is the rest of the memory being held in reserve by the OS to be used as needed? If so, then why does Free Memory go down instead of staying constant when another application is opened?

I also noticed this memory "black hole" when I was running 11.0 on a system with 4 GB of RAM. The OS appeared to "take up" a large chunk of memory that was NOT being used by any applications, making it "disappear" - meaning that the applications were using about 1.3 GiBytes of RAM while Free Memory was showing only .7 GiBytes instead of the over 2 GB of RAM that should have been free.
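(For what it's worth, the "missing" memory here is the page cache: Linux counts file-cache pages as used even though they are reclaimed on demand, which is also why Free Memory drops when a new application opens.) As a minimal C sketch of reading the same counters programmatically via the sysinfo(2) syscall - note sysinfo reports buffers but not the cached figure, which only /proc/meminfo exposes:

Code:
/* Minimal sketch: read the totals System Monitor and "top" summarise,
 * via the sysinfo(2) syscall. "Used" in those tools includes page
 * cache; sysinfo exposes total/free/buffers but not "cached". */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;

    if (sysinfo(&si) < 0) {
        perror("sysinfo");
        return 1;
    }
    printf("total %lu MiB, free %lu MiB, buffers %lu MiB\n",
           si.totalram  * si.mem_unit >> 20,
           si.freeram   * si.mem_unit >> 20,
           si.bufferram * si.mem_unit >> 20);
    return 0;
}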
I am monitoring physical memory in a server I administer, and my hardware provider told me they had increased physical memory size to 4 GB... However, using several tools (free -m; top; dmesg | grep Memory; grep MemTotal /proc/meminfo) I discovered that I actually have 3 GB, not 4... But my doubt comes from the fact that dmesg | grep Memory tells me I have 3103396k/4194304k available. The first number is effectively 3 GB, but the second one is 4! So why am I looking at these two different numbers?
In this example, my 993.4 MiB of memory is said to have 575.9 MiB used, along with 163.4 MiB of my 2.8 GiB of swap. But in my Processes tab, the most memory-hogging program uses 98.3 MiB, then Pidgin at 25.9 MiB, then 18.9 MiB, 14.9, 6.2, 6.1, 5.2, 3.4, 3.3, 1.8, 1.8, 1.7, etc. I'm certain these don't add up to 575.9 MiB, so where is all this extra memory usage coming from?
I am trying to run a simple Perl program that gets stock price data from Yahoo for just 1 ticker symbol. It was running fine until this morning, when it froze and displayed the message: Out of memory!
I cleared my cache by running the following:
Code:
$ sync
$ echo 1 | sudo tee /proc/sys/vm/drop_caches   # free page cache
$ echo 2 | sudo tee /proc/sys/vm/drop_caches   # free dentries and inodes
$ echo 3 | sudo tee /proc/sys/vm/drop_caches   # free both

but it hasn't helped.
Even Firefox has been freezing, so I basically cannot do anything on my computer.
I need to monitor the amount of free physical memory on Linux from within a large C program. The sampling will occur very frequently, so the measurement cannot be performance intensive. The fact that Linux uses much of the theoretically free memory for cache and buffers means that just measuring the free pages is not sufficient. Using free + cache + buffers gives an overestimate, as not all cache/buffers can be freed, but I could get a rough idea of how much generally can't be and subtract that from the answer.
Possible options that I've come across so far are:
- Parsing /proc/meminfo - but that involves reading from a file, which is slow.
- Extracting the free, cache and buffers values from the output of the free command - but is there a quick way to do this?
- Parsing the /proc/freemem file produced by the API here - but this is again reading from a file. Is there a way to get that output directly?
Speed is an extremely high priority, and the answer must accurately represent the amount of memory that my program could expand into (to within a few MB).
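One approach worth sketching: keep a file descriptor to /proc/meminfo open and re-read it on each sample, so there is no open/close overhead per measurement; /proc files are generated in memory, so a single small read is typically cheap. A minimal sketch, assuming the usual MemFree/Buffers/Cached field names, not a tuned implementation:

Code:
/* Sketch: sample free-ish memory by re-reading a persistent
 * /proc/meminfo fd. Field names assumed from a typical kernel;
 * a 4 KB buffer is normally enough for the whole file. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static long grab(const char *buf, const char *key)
{
    const char *p = strstr(buf, key);
    return p ? atol(p + strlen(key)) : -1;   /* value in kB */
}

long estimate_free_kb(int fd)
{
    char buf[4096];
    ssize_t n;

    if (lseek(fd, 0, SEEK_SET) < 0)
        return -1;
    n = read(fd, buf, sizeof(buf) - 1);
    if (n <= 0)
        return -1;
    buf[n] = '\0';

    /* Free + reclaimable cache/buffers; an overestimate, as noted
     * above, since not all cache can actually be dropped. */
    return grab(buf, "MemFree:") + grab(buf, "Buffers:")
         + grab(buf, "Cached:");
}

int main(void)
{
    int fd = open("/proc/meminfo", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("~%ld kB available\n", estimate_free_kb(fd));
    close(fd);
    return 0;
}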
Basically, I have a machine with 16GB of RAM and have just discovered that one process using all of it can crash the whole system. How could I run a process on the system in such a way that if more than 90% of system memory is used, the process immediately crashes?
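One hedged sketch of this: wrap the process and cap its address space with setrlimit(RLIMIT_AS) before exec, so allocations beyond the cap fail (usually killing the process) instead of taking the whole box down. Note that RLIMIT_AS limits virtual memory rather than physical use, so the 90% arithmetic below is an approximation, not a guarantee:

Code:
/* Sketch: cap a child process's address space at ~90% of total RAM
 * before exec'ing it, so its allocations fail instead of exhausting
 * the system. RLIMIT_AS bounds virtual, not resident, memory. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/sysinfo.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct sysinfo si;
    struct rlimit rl;

    if (argc < 2) {
        fprintf(stderr, "usage: %s cmd [args]\n", argv[0]);
        return 1;
    }
    if (sysinfo(&si) < 0) {
        perror("sysinfo");
        return 1;
    }

    /* 90% of total RAM, in bytes */
    rl.rlim_cur = rl.rlim_max = (rlim_t)(si.totalram / 10 * 9) * si.mem_unit;
    if (setrlimit(RLIMIT_AS, &rl) < 0) {
        perror("setrlimit");
        return 1;
    }

    execvp(argv[1], argv + 1);
    perror("execvp");
    return 1;
}

Usage would be something like ./memcap mybigjob args, where memcap is this hypothetical wrapper.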
I'm a little bit confused about partitioning the filesystem in Linux - specifically, the difference between what fdisk does and what mkfs does (when formatting the disk). I can't clearly state my problem, so please look at this picture:
I am wondering whether evolution-alarm-notify and evolution-data-server-1.4 can be removed from the System Monitor, or whether I should just leave them alone. I don't want to touch them if that would cause a system disaster, so can you please confirm for both whether it is safe to remove them? I am running an older version of Ubuntu, 5.10, on my laptop.

My Firefox browser takes so much memory that it runs very slowly, and I need to cut down the two programs listed above, or else work out what other programs I should remove in the System Monitor.
I'm searching for a good SUSE monitoring application. I want to monitor performance (CPU, memory) as well as processes and logins to the system, and I want it to automatically run a script and send an email when a process goes down.
I am trying to build and bring up Linux (embedded) for a piece of hardware which has a MIPS 74K processor, 16MB flash, 128MB DDR, and network/USB support. How do I configure/set the exact addresses of the physical memory map in the kernel? How does the kernel know where the system RAM, I/O memory, and root FS are? I have read some books and found how applications can read special files like /proc/iomem to find out info about memory, but what I need is how to set those addresses at the beginning, when I build the kernel and FS, in order to boot the kernel on my hardware.
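On MIPS boards of that era, the usual answer is that the bootloader either passes the memory size on the kernel command line (e.g. mem=128M) or the board-support code registers RAM itself. A hypothetical sketch for the older (pre-devicetree) arch/mips interface, with placeholder base/size values for this board:

Code:
/* Sketch of a board setup hook registering system RAM with the
 * kernel; add_memory_region()/plat_mem_setup() follow the older
 * arch/mips interface, and the base/size values are placeholders. */
#include <linux/init.h>
#include <asm/bootinfo.h>

void __init plat_mem_setup(void)
{
    /* 128 MB of DDR starting at physical address 0 */
    add_memory_region(0x00000000, 128 << 20, BOOT_MEM_RAM);
}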
When setting up a Linux system, there is a common suggestion to make the swap space twice as big as your physical memory. I want to know why we need this and where this suggestion comes from.
I am doing a test to get the memory used by Apache's apache2 processes. I used a script to get VmSize and VmRSS from /proc/<pid>/status and looped through that to get the sum of VmSize and VmRSS over all the apache2 processes.
I found that the VmSize total (about 4GB) and VmRSS total (about 3.4GB) were much larger than physical memory (1GB) when the Apache server was saturated. This is said to be because shared libraries used by many processes simultaneously get counted multiple times. So how do I get the physical memory actually used by the apache2 processes, or at least a more reasonable memory figure?
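If the kernel is new enough to expose /proc/<pid>/smaps, summing the Pss (proportional set size) field avoids the double counting, because each shared page is divided among the processes that map it. A sketch along those lines, which also assumes /proc/<pid>/comm exists (kernels 2.6.33+) for matching the process name:

Code:
/* Sketch: approximate physical memory of all "apache2" processes by
 * summing Pss from /proc/<pid>/smaps, which splits shared pages
 * among the processes mapping them. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static long pss_kb(const char *pid)
{
    char path[64], line[256];
    long total = 0, v;
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%s/smaps", pid);
    if (!(f = fopen(path, "r")))
        return 0;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "Pss: %ld", &v) == 1)
            total += v;
    fclose(f);
    return total;
}

static int is_apache(const char *pid)
{
    char path[64], comm[64] = "";
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%s/comm", pid);
    if (!(f = fopen(path, "r")))
        return 0;
    if (fgets(comm, sizeof(comm), f))
        comm[strcspn(comm, "\n")] = '\0';
    fclose(f);
    return strcmp(comm, "apache2") == 0;
}

int main(void)
{
    DIR *d = opendir("/proc");
    struct dirent *e;
    long sum = 0;

    while (d && (e = readdir(d)))
        if (isdigit((unsigned char)e->d_name[0]) && is_apache(e->d_name))
            sum += pss_kb(e->d_name);
    if (d)
        closedir(d);
    printf("apache2 Pss total: %ld kB\n", sum);
    return 0;
}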
I have a system with 2GB of memory and 4GB of swap.
This is the output from :
PHP Code:
Why is so much memory being used as cache? Occasionally swap gets used, and it seems the system could reclaim the cached memory instead of swapping...
A process tries to access memory, for example through an array (e.g. vect[0]=123). What happens?
Below is what I guess, but I'm not sure, and I accept any comment (please distinguish between "the system" and "the CPU" where relevant).
Let's suppose swapping to disk is disabled.
We have two scenarios: without and with cache.
If no cache is present in the system: 1. The CPU must discover the physical address of vect[0] from its virtual address. To do that, it has to read from 3 (or 2, depending on the system?) page tables, which are stored in memory as well. 2. The CPU writes to the final address.
That means 4 memory accesses in total.
If a cache is present: 1. As above, but if the page tables are in the cache, we have 3 accesses to it. 2. If the requested page is not in the cache, it is read from RAM and transferred into it; afterwards, the cache is written. In the best case we have 4 cache accesses.
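To make step 1 concrete, here is how a 32-bit x86 virtual address splits into the two page-table indices plus a page offset under classic two-level paging (the address value is just a hypothetical &vect[0]):

Code:
/* Sketch: splitting a 32-bit x86 virtual address (two-level paging)
 * into the indices used for the page-table walk described above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vaddr  = 0x08049f80;            /* hypothetical &vect[0] */
    uint32_t dir    = (vaddr >> 22) & 0x3ff; /* page-directory index  */
    uint32_t table  = (vaddr >> 12) & 0x3ff; /* page-table index      */
    uint32_t offset =  vaddr        & 0xfff; /* offset within page    */

    printf("PDE %u, PTE %u, offset 0x%03x\n", dir, table, offset);
    return 0;
}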
I have 32-bit Ubuntu installed and my laptop has 4GB RAM, but only 3GB is considered by Linux. My question is: what is the reason for the upper limit on physical memory?
Code:
$ dmesg | grep Memory
[0.000000] Memory: 3052428k/3112960k available (4673k kernel code, 56364k reserved, 2121k data, 656k init, 2200904k highmem)

I am familiar with the virtual memory concept where Linux splits the upper 1GB for the kernel and the lower 3GB for user processes; in total, 32-bit Linux can address 4GB of virtual addresses. Does this mean that 1GB of physical memory is already mapped to the 1GB of kernel space, and that Linux only shows the remaining 3GB of physical memory left for the user in the above command?
I did some searching on the internet and found some articles related to this, but they only confused me further, since some articles suggest 4GB is the upper limit without mentioning whether that means virtual or physical memory, some bring in the concept of PAE, etc. I'm relatively new to Linux's memory management, so it would be really helpful if someone could answer this.
I allocated a chunk of memory using kmalloc in a device driver; kmalloc returns a pointer to the allocated memory. This is one of my first few drivers.
I assume that the address returned is a virtual address, and I need to find the physical address of the memory location. I am working on an Intel 64-bit Fedora machine. I used the virt_to_phys() routine declared in <asm/io_64.h>, and found that this routine returns an unsigned long value (32-bit) instead of an unsigned long long value (64-bit). Moreover, it seems that it simply returns the address minus an OFFSET instead of extracting the value from the page tables.
So is there any function or system call in Linux which will allow me to see the actual physical address on the Intel 64 arch?
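For what it's worth, for kmalloc'ed memory that subtraction is the correct translation: kmalloc allocates from the kernel's direct mapping, where physical = virtual - PAGE_OFFSET, so no page-table walk is needed (a walk only matters for vmalloc or user-space addresses). Also, unsigned long is 64 bits wide on x86_64, so nothing is truncated. A minimal module sketch, assuming a reasonably recent kernel:

Code:
/* Sketch of a module that prints the physical address of a
 * kmalloc'ed buffer; virt_to_phys() is valid for kmalloc/lowmem
 * (direct-mapped) addresses. */
#include <linux/module.h>
#include <linux/slab.h>
#include <asm/io.h>

static void *buf;

static int __init phys_demo_init(void)
{
    buf = kmalloc(4096, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;
    pr_info("virt %p -> phys %llx\n", buf,
            (unsigned long long)virt_to_phys(buf));
    return 0;
}

static void __exit phys_demo_exit(void)
{
    kfree(buf);
}

module_init(phys_demo_init);
module_exit(phys_demo_exit);
MODULE_LICENSE("GPL");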
As I understand it, out of the 1GB of virtual address space reserved for the kernel (from 3GB to 4GB of the process address space), the kernel image (code, data, bss, stack, heap) resides starting at address 0x0. The vmalloc area starts either at the end of the physical RAM size or at 896MB; this 896MB cap is mandated to ensure that a minimum of 128MB is reserved as vmalloc_reserve for vmalloc, kmap, etc.
Is this understanding correct? Now, trying to map the physical zones into this 1GB address space:
The initial 16MB is mapped to ZONE_DMA;
16MB - 896MB is mapped to ZONE_NORMAL;
896MB - 1024MB is mapped to ZONE_HIGHMEM.
Does this mean that the kernel image resides in the ZONE_DMA area? Will any call to vmalloc() in kernel code return an address beyond 896MB? insmod of any LKM will internally invoke vmalloc() to obtain a virtually contiguous area - where will this code be physically located: along with the rest of the kernel code in ZONE_DMA, or in ZONE_HIGHMEM?
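One way to see where these allocations land is to print kmalloc and vmalloc return values next to high_memory from a throwaway module; the vmalloc pointer should sit above the direct map, while the kmalloc one sits inside it. A sketch, with no board-specific assumptions:

Code:
/* Sketch: compare a kmalloc (direct-mapped, ZONE_DMA/NORMAL) address
 * with a vmalloc one, which lands in the vmalloc area above
 * high_memory. */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static int __init zones_demo_init(void)
{
    void *k = kmalloc(4096, GFP_KERNEL);
    void *v = vmalloc(4096);

    pr_info("high_memory %p, kmalloc %p, vmalloc %p\n", high_memory, k, v);
    kfree(k);
    vfree(v);
    return 0;
}

static void __exit zones_demo_exit(void)
{
}

module_init(zones_demo_init);
module_exit(zones_demo_exit);
MODULE_LICENSE("GPL");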
Whenever I'm running my application process, physical memory usage increases by 1MB every 2 hours. I observed this using the 'free -m' command, but 'top' does not show any increase in the RSS size; it is the same as when the process started. Even after I stop my process, the increased memory is not released back, and if I start my application process again, memory usage starts increasing by 1MB every 2 hours once more.

So the increase shows up in 'free' only while my application is running, yet 'top' shows no change in the RSS size. If my application were leaking memory allocated by new/malloc, that memory should be released back whenever my application exits, and the size increase should show up in 'top' for that process, right? That is not happening, which suggests there are no leaks in my process. So why does physical memory keep increasing when only my process is running?
We have a four-socket AMD machine running Barcelona processors, with 64GB of RAM. The system runs for extended periods just fine when it is running at or below the 64GB memory limit, but a typical load on the machine has short periods where it uses heavy amounts of swap space (30+ GB); we have a swap partition of around 96GB. When we push the machine into heavy swapping, it will fail within 24 hours. Has anyone experienced this problem, and is there a solution other than buying more physical memory? Or am I wrong, and maybe the physical memory itself is the issue? I thought maybe it was the memory, but after stripping the memory down I get the same problem: failure upon heavy swapping.
openSUSE 11.2 is installed on a machine with 5GB of memory, but System Information in the KDE desktop shows only 3GB total memory. I just added a further 4GB, but no change is shown in System Information.
Is there something I must do to have sysinfo report the true value, and does this mean that the memory not shown is not being used?