Server :: "free" Shows Far More Memory Usage Than Summing Up Application Usage?
Aug 6, 2010
I've come across a really strange issue with one of my RHEL servers. The "free" command shows that 7019 MB of memory are actually in use by the system, but when I sum up the actual per-process usage (or even the virtual usage, as in the example below) it doesn't add up: the sum is far less than what "free" reports:
Code:
[root@server1 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         12011       7946       4065          0          4         23
-/+ buffers/cache:        7919       4092
[code]....
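One way to sanity-check the discrepancy (a rough sketch, not a definitive diagnosis) is to sum the resident set sizes of every process and compare the total with what free reports, keeping in mind that RSS double-counts shared pages and that kernel allocations are not attributed to any process:
Code:
# Sum per-process RSS and compare with free; a gap is normal because RSS
# double-counts shared pages and kernel memory (slab, page tables) never
# shows up under any PID.
ps -eo rss= | awk '{sum += $1} END {printf "Total RSS: %d MB\n", sum/1024}'
free -m
grep -E 'Slab|PageTables|Mapped' /proc/meminfo   # kernel-side consumers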
View 2 Replies
Dec 10, 2010
Nagios had alerted me that the server had a very high load average, exceeding the critical level (17+). When I logged onto the server I found that all 4GB of swap was in use, despite there being 15GB+ of free memory (and that's not even counting memory from cache and buffers!). Because it seems the heavily used pages were being kept in swap, the I/O wait on the server became very high, and four kswapd daemons were taking up nearly 100% of the available CPU. This coincided with an error reported by Bacula during a backup job while changing to a bad tape...
From /var/log/bacula.log:
Code:
10-Dec 02:11 bacula-sd JobId 1898: End of medium on Volume "4097" Bytes=434,170,000,000 Blocks=217,084 at 10-Dec-2010 02:11.
10-Dec 02:11 bacula-sd JobId 1898: 3307 Issuing autochanger "unload slot 4097, drive 0" command.
10-Dec 02:12 bacula-sd JobId 1898: 3301 Issuing autochanger "loaded? drive 0" command.
10-Dec 02:12 bacula-sd JobId 1898: 3302 Autochanger "loaded? drive 0", result: nothing loaded.
10-Dec 02:12 bacula-sd JobId 1898: 3304 Issuing autochanger "load slot 4096, drive 0" command.
10-Dec 02:13 bacula-sd JobId 1898: 3305 Autochanger "load slot 4096, drive 0", status is OK.
10-Dec 02:13 bacula-sd JobId 1898: Volume "4096" previously written, moving to end of data.
10-Dec 03:51 bacula-sd JobId 1898: Error: Unable to position to end of data on device "Tape-1" (/dev/IBMtape0n): ERR=dev.c:1384 read error on "Tape-1" (/dev/IBMtape0n). ERR=Input/output error.
10-Dec 03:51 bacula-sd JobId 1898: Marking Volume "4096" in Error in Catalog.
10-Dec 03:51 bacula-sd JobId 1898: 3307 Issuing autochanger "unload slot 4096, drive 0" command.
10-Dec 03:58 bacula-sd JobId 1898: 3301 Issuing autochanger "loaded? drive 0" command.
10-Dec 03:58 bacula-sd JobId 1898: 3302 Autochanger "loaded? drive 0", result: nothing loaded.
10-Dec 03:58 bacula-sd JobId 1898: 3304 Issuing autochanger "load slot 4098, drive 0" command.
10-Dec 03:58 bacula-sd JobId 1898: 3305 Autochanger "load slot 4098, drive 0", status is OK.
10-Dec 03:59 bacula-sd JobId 1898: Wrote label to prelabeled Volume "4098" on device "Tape-1" (/dev/IBMtape0n)
10-Dec 03:59 bacula-sd JobId 1898: New volume "4098" mounted on device "Tape-1" (/dev/IBMtape0n) at 10-Dec-2010 03:59.
At the same time, these messages started occurring in /var/log/messages:
Code:
Dec 10 03:51:47 07 kernel: Mem-info:
Dec 10 03:51:47 07 kernel: Node 0 DMA per-cpu:
Dec 10 03:51:47 07 kernel: cpu 0 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 0 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 1 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 1 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 2 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 2 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 3 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 3 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 4 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 4 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 5 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 5 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 6 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 6 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 7 hot: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: cpu 7 cold: high 0, batch 1 used:0
Dec 10 03:51:47 07 kernel: Node 0 DMA32 per-cpu:
Dec 10 03:51:47 07 kernel: cpu 0 hot: high 186, batch 31 used:162
Dec 10 03:51:47 07 kernel: cpu 0 cold: high 62, batch 15 used:48
Dec 10 03:51:47 07 kernel: cpu 1 hot: high 186, batch 31 used:0
Dec 10 03:51:47 07 kernel: cpu 1 cold: high 62, batch 15 used:0
Dec 10 03:51:47 07 kernel: cpu 2 hot: high 186, batch 31 used:0
Dec 10 03:51:47 07 kernel: cpu 2 cold: high 62, batch 15 used:0
Dec 10 03:51:47 07 kernel: cpu 3 hot: high 186, batch 31 used:18
Dec 10 03:51:47 07 kernel: cpu 3 cold: high 62, batch 15 used:0
Dec 10 03:51:47 07 kernel: cpu 4 hot: high 186, batch 31 used:159
Dec 10 03:51:47 07 kernel: cpu 4 cold: high 62, batch 15 used:56
...
Dec 10 03:51:47 07 kernel: Node 3 HighMem per-cpu: empty
Dec 10 03:51:47 07 kernel: Free pages: 732052kB (0kB HighMem)
Dec 10 03:51:47 07 kernel: Active:4232128 inactive:3071288 dirty:158210 writeback:0 unstable:0 free:183320 slab:256840 mapped-file:289545 mapped-anon:3805487 pagetables:13063
Dec 10 03:51:47 07 kernel: Node 0 DMA free:10796kB min:4kB low:4kB high:4kB active:0kB inactive:0kB present:10356kB pages_scanned:0 all_unreclaimable? yes
Dec 10 03:51:47 07 kernel: lowmem_reserve[]: 0 3512 9067 9067
Dec 10 03:51:47 07 kernel: Node 0 DMA32 free:213332kB min:2500kB low:3124kB high:3748kB active:1794108kB inactive:1463220kB present:3596296kB pages_scanned:64 all_unreclaimable? no
Dec 10 03:51:47 07 kernel: lowmem_reserve[]: 0 0 5555 5555
Dec 10 03:51:47 07 kernel: Node 0 Normal free:41028kB min:3952kB low:4940kB high:5928kB active:3409444kB inactive:1471120kB present:5688320kB pages_scanned:0 all_unreclaimable? no
Dec 10 03:51:47 07 kernel: lowmem_reserve[]: 0 0 0 0
Dec 10 03:51:47 07 kernel: Node 0 HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
Dec 10 03:51:47 07 kernel: lowmem_reserve[]: 0 0 0 0
Dec 10 03:51:47 07 kernel: Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
...
Well, to cut a long story short, I fixed the problem by disabling the swap partition with 'swapoff'. After about 30 minutes all the swap had been freed and the server went back to normal. I don't dare reactivate the swap partition, and unfortunately, as this is a live server with no failover at the moment, I can't reboot either.
Server Spec:
4 * Dual-Core AMD Opteron(tm) Processor 8214
32GB DDR2 ECC RAM
RHEL 5.5, 2.6.18-194.11.3.el5 SMP x86_64
Running many KVM VMs (all CentOS x86_64); the KSM daemon (ksmd) is in use.
bacula-dir Version: 5.0.0
IBM Tape Drive using lin_tape module version 1.34.0 according to modinfo
And before anybody asks:
# sysctl vm.swappiness
vm.swappiness = 10
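Given that the Mem-info dump above lists several NUMA nodes, one avenue worth checking (a hedged sketch, not a confirmed diagnosis for this box) is whether a single node was being exhausted while the others still had free pages, which can push pages to swap despite plenty of overall free memory:
Code:
# Inspect per-NUMA-node free memory and the zone reclaim setting.
numactl --hardware                                    # per-node size/free (numactl package)
grep -H MemFree /sys/devices/system/node/node*/meminfo
sysctl vm.zone_reclaim_mode                           # 0 lets allocations fall back to other nodes
# If swap is ever re-enabled, it can be done without a reboot:
# swapon -a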
View 5 Replies
View Related
May 3, 2011
I am looking for a free database with low memory usage, engines similar to InnoDB and MEMORY, a C API, trigger support, and client/server support, for use in embedded Linux systems.
View 8 Replies
View Related
Jul 12, 2011
I'm trying to measure my memory usage. When I use these different programs, they all show me using different amounts of RAM.
View 2 Replies
View Related
Mar 15, 2010
I was trying to get memory and disk usage with SIGAR on Windows and Ubuntu. I managed it on Windows by just copying the SIGAR library into the JDK library directory, but I was unable to do the same on Ubuntu. I've copied the library into the java-6-sun library directory, but I still can't run the program.
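One thing worth trying instead of copying the library into the JDK tree (a sketch with hypothetical paths; MyMonitor stands in for whatever class is being run) is pointing the JVM at the directory containing the SIGAR native library:
Code:
# Tell the JVM where the SIGAR native library lives (paths are placeholders).
export LD_LIBRARY_PATH=/opt/hyperic-sigar/sigar-bin/lib:$LD_LIBRARY_PATH
java -Djava.library.path=/opt/hyperic-sigar/sigar-bin/lib \
     -cp sigar.jar:. MyMonitor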
View 14 Replies
View Related
Feb 14, 2009
We run a JBoss app server, which of course is fully multithreaded under one JVM. I have a couple of questions regarding monitoring on a per-thread basis:
1. Is there a way to see which thread is bound to which CPU core?
2. Is there a way to see CPU and memory usage per thread? Something like prstat on a Sun box, which is real-time and gives detailed information about threads per CPU.
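A few standard Linux tools cover part of this (a sketch; 1234 is a placeholder for the JBoss PID, and note that memory belongs to the process as a whole, so there is no true per-thread memory figure):
Code:
top -H -p 1234                                   # per-thread CPU in top's thread mode
ps -L -o pid,lwp,psr,pcpu,pmem,comm -p 1234      # psr = core each thread last ran on
taskset -cp 1234                                 # current CPU affinity of the process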
View 6 Replies
View Related
Jul 23, 2010
I have a C++ application that consumes a lot of memory at run time. It is a very large project with many sub-modules, and my goal is to reduce the runtime memory usage as much as possible. I would therefore like to know if there is a tool I can use to profile the code (note that I am not interested in checking for memory leaks or corruption, so Valgrind is not what I'm after). I need to know which module has the most static data, such as large arrays or many variables, to know where to start reducing.
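For locating static data specifically, the binutils tools may already be enough (a sketch; build/mymodule.o is a hypothetical path):
Code:
# Rank object files by their static data sections, then list the largest symbols.
find . -name '*.o' -exec size {} + | sort -k2 -nr | head -20   # column 2 = .data, column 3 = .bss
nm --size-sort --print-size -C build/mymodule.o | tail -20     # biggest symbols are printed last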
View 3 Replies
View Related
Jan 13, 2009
I am sure all of us know the output of the top command in Linux. I want to get the values that top reports for CPU usage and memory usage from a program. How do I do that?
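The figures top prints come from /proc, which any language can read directly; a shell sketch of the same sources:
Code:
head -1 /proc/stat                                       # aggregate CPU jiffies: user nice system idle iowait irq softirq
grep -E 'MemTotal|MemFree|Buffers|Cached' /proc/meminfo
top -b -n 1 | head -5                                    # or just capture top's summary in batch mode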
View 3 Replies
View Related
Jun 14, 2011
I'm running a recursive DNS server on Ubuntu Server 10.04 64-bit with BIND 9.7.0-P1 and am having issues with memory usage. The named process's memory usage keeps increasing from about 500MB to 4GB over the course of a couple of weeks. If I don't restart BIND in the meantime, the swap fills up and performance gets very bad.
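If the growth is cache-driven (an assumption, not a given), two things worth trying are watching named's footprint and capping its cache; max-cache-size is a standard named.conf option:
Code:
ps -o rss=,vsz= -p $(pidof named)   # resident and virtual size in kB, to track over time
rndc flush                          # empty the cache; if usage stops climbing afterwards, the cache is the culprit
# In named.conf, inside options {}:  max-cache-size 512m;   then reload with: rndc reconfig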
View 6 Replies
View Related
Jan 24, 2011
I use Linux and Unix and I want to monitor memory usage per process, to prevent memory leaks and out-of-memory conditions on the system. Is there any command or syntax that gives better, more presentable data about one process's memory usage than the command below?
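A few common options (a sketch; 1234 stands in for the real PID):
Code:
ps -o pid,vsz,rss,pmem,comm -p 1234   # one-line summary
pmap -x 1234 | tail -1                # totals of mapped/resident/dirty memory in kB
watch -n 60 'ps -o rss= -p 1234'      # crude once-a-minute trend watch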
View 3 Replies
View Related
Feb 11, 2010
I have a VPS running a web application served by Apache that on average deals with 20-50 requests per second. It's usually above this point (50 requests per second) that the amount of memory Apache uses becomes too much for the VPS and errors start occurring: web pages crash and the VPS falls over for a minute or two before going back to normal.
I believe that MaxClients is the best way to reduce the amount of RAM that Apache uses, and I am planning to reduce MaxClients from 256 (the default value) to around 100. Each Apache process uses ~15MB and the server has 1900MB of RAM in total; the server does nothing other than run Apache and a few crons.
Current settings are:
Code:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
# prefork MPM
# StartServers: number of server processes to start
[Code].....
I tried reducing MaxClients before, which led to massive slowness, so I need some other options as well.
Does my suggestion of reducing MaxClients to ~100 seem sensible? What are my options if the server experiences slowness again: optimise the application? What's the best way to reduce memory usage: move images to another web server?
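A sizing sketch using the figures quoted above (the 300MB reserved for the OS and crons is an assumption, and the process name may be httpd rather than apache2 depending on the distribution):
Code:
# Measure the real average per-child RSS rather than guessing.
ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {printf "children: %d  avg RSS: %.1f MB\n", n, sum/n/1024}'
# MaxClients ~= (RAM available to Apache) / (avg RSS per child)
#            ~= (1900 - 300) / 15  ~= 106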
View 2 Replies
View Related
Sep 12, 2010
Top only shows the memory usage of individual processes. Apache often runs hundreds of processes, each of which may use only a small amount of memory, but the total memory consumed by all Apache processes can be fairly large. Is there a way to see the total memory usage for all Apache processes?
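Summing plain RSS over-counts pages shared between children; a sketch that sums proportional set size (PSS) instead, assuming the kernel exposes Pss in /proc/<pid>/smaps (2.6.25 and later) and that the processes are named httpd (use apache2 on Debian-style systems):
Code:
for pid in $(pgrep httpd); do
    awk '/^Pss:/ {s += $2} END {print s}' /proc/$pid/smaps
done | awk '{t += $1} END {printf "Total PSS for Apache: %.1f MB\n", t/1024}'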
View 7 Replies
View Related
Dec 13, 2010
We have a situation where we have to set up a server to send traps with information regarding CPU, memory usage, etc. I know snmpd can be set up to allow another process to request SNMP information about the server, but can it be done the other way around (have a host send information about itself to another server through SNMP)?
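net-snmp's snmpd can push traps itself through the DisMan event MIB; a configuration sketch (the hostname and community string are placeholders, and an internal SNMPv3 user via iquerySecName/rouser may also be required for the built-in monitors to run):
Code:
cat >> /etc/snmp/snmpd.conf <<'EOF'
# send traps to this manager (hostname and community are placeholders)
trapsink nms.example.com public
# thresholds the agent watches itself and reports on via traps
load 12 10 5
disk / 10%
defaultMonitors yes
EOF
service snmpd restart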
View 4 Replies
View Related
Feb 2, 2010
Last weekend I set up my first headless Ubuntu home file server and torrent downloader with Ubuntu 9.10. Very cool, but the CPU is way too fast for a home server: a P4 HT 2.8GHz. Unfortunately it has only 256MB of RAM, so no X server and no VNC (it's an old HP office PC). At the moment memory usage is only 40MB without the X server, and SSH works just fine. A few questions I can't seem to find answers to on Google: What is a good command-line network monitoring program, something similar to htop? Also, Ubuntu 9.10 has a lot of console-kit-daemon instances running after boot, about 20-30, each using some memory that I can't spare.
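On the console-kit point, those 20-30 entries are normally threads of a single console-kit-daemon process, so they share one copy of its memory rather than each costing their own; a quick check (a sketch):
Code:
ps -e  | grep -c console-kit             # number of processes
ps -eL | grep -c console-kit             # number of threads (what htop lists by default)
ps -o rss=,comm= -C console-kit-daemon   # actual resident memory of the daemon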
View 2 Replies
View Related
Mar 18, 2011
I have a computer with 16GB of RAM. At the moment, top shows all the RAM is taken (NOT by cache), but the RAM used by the various processes is very far from 16GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
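When per-process RSS comes nowhere near the used total and cache is excluded, kernel-side memory is the usual suspect; a sketch of where to look:
Code:
grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|Shmem' /proc/meminfo
slabtop -o | head -20      # largest slab caches, if slabtop is installed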
View 1 Replies
View Related
Feb 17, 2010
Is there an application to monitor per-application network usage?
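One candidate (it usually needs installing, and it groups traffic by process rather than strictly by application; eth0 is a placeholder interface name):
Code:
nethogs eth0     # live bandwidth per process on a given interface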
View 3 Replies
View Related
Dec 10, 2010
I'm running into a problem where my system is running out of disk space on the root partition, but I can't figure out where the runaway usage is. I've had a stable system for a couple of years now, and it just ran out of space. I cleaned some files up to get the system workable again, but I can't find the big usage area, and I'm getting conflicting results. For example, when I do a df it says I'm using 44GB out of 58GB:
Code:
[root@Zion ~]# df -h
Filesystem Size Used Avail Use% Mounted on
[code]....
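Two usual suspects when df and du disagree (a sketch):
Code:
du -x --max-depth=1 / | sort -n | tail -15   # biggest top-level directories, staying on the root filesystem
lsof +L1                                     # files deleted but still held open: df counts them, du cannot see them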
View 5 Replies
View Related
Nov 15, 2010
Is there any way to monitor one process' CPU usage and RAM usage over time on Linux? I am trying to change to a cheaper VPS and need to work out what level of CPU and RAM I need!
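Two ways to log one process over time (a sketch; PID 1234 is a placeholder):
Code:
pidstat -u -r -p 1234 60                 # CPU and memory once a minute (sysstat package)
# or with nothing but ps:
while sleep 60; do
    echo "$(date '+%F %T') $(ps -o pcpu=,rss= -p 1234)" >> usage.log
done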
View 2 Replies
View Related
Apr 20, 2010
I am using malloc and free a lot in my program. The memory shows up as allocated, but when I free it, it doesn't show as released (I am using the top command to view the VIRT memory usage). If this continuously grows, what will happen to my program? Will it run out of memory?
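One thing to keep in mind (a general observation, not a diagnosis of this program): glibc's allocator often keeps freed heap around for reuse instead of returning it to the kernel, so VIRT and RES need not shrink after free(); what matters is whether the numbers keep growing under a steady workload:
Code:
watch -n 1 'ps -o vsz=,rss= -p 1234'   # PID 1234 is a placeholder; values that level off suggest no leak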
View 4 Replies
View Related
Sep 1, 2011
Is it possible to narrow the output down to give me JUST the free RAM? Both commands that I know give me much more information than I would like to log.
Code:
free -m
This command gives me the following; I really only want the number under "free":
Code:
             total       used       free     shared    buffers     cached
Mem:          1262        612        649          0        250        114
-/+ buffers/cache:         247       1014
Swap:         4010          6       4003
Code:
cat /proc/meminfo
This command gives me the following; I really only want "MemFree":
Code:
MemTotal: 1292372 kB
MemFree: 636088 kB
Buffers: 279032 kB
Cached: 118768 kB
SwapCached: 532 kB
Active: 191408 kB
Inactive: 324684 kB .....
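Either output can be trimmed down to the single number (a sketch):
Code:
free -m | awk '/^Mem:/ {print $4}'          # the "free" column from free -m, in MB
awk '/^MemFree:/ {print $2}' /proc/meminfo  # MemFree from /proc/meminfo, in kB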
View 2 Replies
View Related
May 6, 2011
I have been running into a strange problem. For a few days, a few times a day, something seems to take over my machine. Suddenly I can't type, or a key shows up about 5 seconds after being pressed, music players halt, etc. In anticipation, I had top and htop running. Sure enough, when this business begins, htop reports all 8 cores firing at near 100% for a few minutes. However, top tells me nothing about who is to blame. The %sy CPU figure in top shows heavy usage, but the top processes are nothing remarkable, mostly just top, htop and conky, all running at around 1 or 2% on average. I am a bit puzzled as to why I can't see the CPU hog so that I can figure out what is going on. My question then: are there some processes (in kernel space?) that don't show up in top, yet which could be hogging the CPU?
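Part of the answer is that time spent in interrupts, softirqs or I/O wait is charged to the CPU rather than to any process, so it never appears against a PID in top; per-CPU breakdowns make it visible (a sketch, assuming the sysstat package for mpstat):
Code:
mpstat -P ALL 1 5   # per-core %usr/%sys/%iowait/%irq/%soft, five one-second samples
vmstat 1 5          # system-wide interrupts, context switches and blocked tasks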
View 3 Replies
View Related
May 11, 2010
Is this normal? I checked System Monitor and the topmost process is Xorg, followed by compiz. When I first started with 10.04 a few days ago it was around 300-500MB.
View 3 Replies
View Related
Jan 11, 2009
I am wondering how to get CPU and memory usage in a Linux environment. Could you tell me the ways to do it?
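A few standard command-line readings (a sketch):
Code:
vmstat 1 5            # CPU (us/sy/id/wa) and memory columns, five one-second samples
free -m               # memory and swap in MB
top -b -n 1 | head -5 # top's own summary, usable from scripts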
View 10 Replies
View Related
Jun 2, 2010
I am having a few problems with a Red Hat box involving memory usage. I have 64GB of memory and 'top' tells me I'm using 60GB of it, but if I add up all the '%MEM' figures I get no more than 20%. Where is the other 80%?
We have an Oracle instance that is using shared memory, but this is capped at 45GB. That means there is about 15GB unaccounted for. What utilities can I use on Red Hat to ascertain memory usage other than 'top'? Any better ones, more detailed, that look at shared memory, swap, and so on?
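Because the Oracle SGA lives in SysV shared memory, per-process %MEM can either double-count it or miss it entirely; looking at the shared memory segments directly helps (a sketch):
Code:
ipcs -m                                                      # shared memory segments and their sizes
grep -E 'HugePages|PageTables|Mapped|Cached' /proc/meminfo   # huge pages and page tables for a large SGA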
View 2 Replies
View Related
May 27, 2010
Is there a command that shows current network usage, similar to the way htop shows memory and processor usage?
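Two interactive monitors in that spirit (both usually need installing; eth0 is a placeholder interface name):
Code:
iftop -i eth0   # live bandwidth per connection
nload eth0      # per-interface throughput with simple graphs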
View 3 Replies
View Related
Sep 17, 2009
I'm trying to understand the performance of my machine, and memory usage just isn't adding up. When I run top it will typically show 301M of 308M used, but the total of everything in the RES column is nowhere near 300M and the total of the %MEM column isn't more than 20-30%. So how do I figure out what is using all the memory? And is there some way to control it to optimize performance?
View 3 Replies
View Related
Apr 13, 2011
Under SuSE (Mem: 31908592k total, 31421504k used), how do I know which process or program is using my memory?
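A quick ranking by resident memory (a sketch):
Code:
ps aux --sort=-rss | head -15   # processes ordered by resident set size
# within top itself, Shift+M re-sorts the list by memory usage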
View 2 Replies
View Related
Apr 15, 2011
I am a bit worried about my Linux VServer box: no more memory is left. To investigate this issue, I was looking at 'top', but it deeply confuses me. It seems that no more memory is left, although the process list in top never adds up to 100%.
[Code]...
View 4 Replies
View Related
Jul 11, 2010
My problem seems to be very simple: it's high memory usage. I occasionally use the movie player to watch a few shows, and I use Firefox as well. My memory usage starts out quite small, at about 500MB, but after using Firefox lightly and the movie player it jumps to almost 2GB, and this is after they've been closed. What gives? I've attached an image so you can see what I'm talking about.
View 7 Replies
View Related
Aug 16, 2010
I've been having some problems with Lucid; all my applications seem to be hogging memory like there's no tomorrow. Within about 15 minutes of booting the system, processes like Google Chrome, Nautilus, Python and Pidgin all start to take seemingly excessive amounts of memory.
Chrome is the worst one, easily shooting over 200-300MB of my 2GB of RAM. I would have reported this as a bug in Chrome itself, but my other applications seem to share the problem to some extent. Also: my colleague has identical hardware and identical versions of Ubuntu/Chrome, yet he has no memory problems whatsoever. Currently I am running Chrome, Geany, Pidgin, Thunderbird and FileZilla. With just these running, Ubuntu now consumes 1.8GB of RAM (and that's including 500MB of cache).
View 7 Replies
View Related