Ubuntu :: Make A Program Crash If System Memory Usage Gets Too High?
Jul 19, 2010
Basically I have a machine with 16GB of RAM and have just discovered that a single process using all of it can crash the whole system. How could I run a process on the system in such a way that if more than 90% of system memory is used, the process immediately crashes?
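One approach, as a minimal sketch: setrlimit(RLIMIT_AS) caps a single process's virtual address space, so allocations beyond the cap fail (and most programs then abort) before they can drag the whole system down. This is a per-process cap, not a true "90% of system memory" rule; the 14 GB below is just an illustrative stand-in for roughly 90% of 16 GB.

Code:
/* limitrun.c - run a command with a hard cap on its address space.
 * Sketch only: RLIMIT_AS limits this one process's virtual memory,
 * which merely approximates "90% of system memory". */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    struct rlimit rl;

    /* illustrative cap: roughly 90% of 16 GB */
    rl.rlim_cur = rl.rlim_max = (rlim_t)14 * 1024 * 1024 * 1024;

    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    execvp(argv[1], &argv[1]);  /* the limit survives the exec */
    perror("execvp");
    return 1;
}

From a shell, ulimit -v (which takes a size in kilobytes) does the same thing for the commands started after it.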
My problem seems very simple: high memory usage. I occasionally use the movie player to watch a few shows, and I use Firefox as well. My memory usage starts out quite small, about 500 MB, but after using Firefox lightly and the movie player it jumps to almost 2 GB, and that's after they've been closed. What gives? I've attached an image so you can see what I'm talking about.
I installed my Debian Sid about a month ago (first Xfce, then GNOME) but have noticed that it's really slow. Upgrades take ages, launching (and using) Firefox takes a long time, and so on. Compared with my Ubuntu and Arch Linux installs (on the same computer), or a previous installation of Debian, there is clearly a problem somewhere. Today I ran top sorted by memory usage: 3.5% xulrunner-stub, 2.1% dropbox, 1.4% aptitude (doing an upgrade), 1.4% clementine, ... nothing terrible. But I still have 2.7 GB of RAM used (more than 50%):
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3967       2685       1282          0         79       1938
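A note on reading that output: the "used" column includes the kernel's buffers and page cache, which are handed back automatically when programs need the space. Taking the figures above at face value, 2685 used minus 79 buffers minus 1938 cached leaves roughly 668 MB actually held by applications, which is consistent with the unremarkable per-process numbers top reported.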
I upgraded from Fedora 13 to 14 over the network. Everything seems to have worked. The one problem since the install is that setroubleshootd consumes a lot of memory.
It doesn't take long for setroubleshootd to jump in memory usage. I can kill the process, but it starts up again. I have tried disabling the service, but it doesn't show up in /etc/init.d:

# service setroubleshootd stop
setroubleshootd: unrecognized service

So I am not sure what I can do to resolve the issue with setroubleshootd besides killing it off every 15 minutes.
In our database server, when checking memory with the top command, we always see 32 GB of RAM utilized. We have set sga_max_size to 8 GB and the PGA to 3 GB. When we shut down the Oracle DB, memory usage dropped to 24 GB according to top, and after a cold reboot of the DB server it went down to 1.5 GB.
But once users start working again at the end of the day, memory goes back up to 32 GB.
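A plausible reading, given those settings: sga_max_size of 8 GB plus a 3 GB PGA target bounds Oracle's own allocation at roughly 11 GB, and the climb back to 32 GB in top once users return is consistent with the Linux page cache refilling as datafiles are read. Cached memory is reclaimed on demand, so a near-full figure in top is not by itself a problem.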
I am having a problem with my server that I would call rather important. For the last three weeks the used space on my hard disks (RAID 1) has been growing. I have 2 x 1 TB HDDs in RAID 1, and I did not install anything during those weeks; used space just grew from 90 GB to 580 GB. The situation has stabilized there for now, but I don't think it's normal.
Bandwidth usage is low (about 120 GB in two months) and I am running 6 Counter-Strike game servers, a forum, a very small website, and some local stuff. A friend of mine suggested my server could have been hacked, and I'm afraid it has been. Some useful information: when I reboot the server, used space drops back to ~100 GB and then starts climbing again. I can't really find where all those files are located:
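One pattern consistent with "used space drops back to ~100 GB on every reboot" is files that were deleted while some process still holds them open: the space is only released when the process exits. If that is the cause here, lsof +L1 lists open files whose link count is zero, and du -x --max-depth=1 / helps locate directories that really are growing on disk.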
I recently upgraded to Ubuntu 10.10 and I am seeing ultra-high memory usage from gnome-settings-daemon (2 GB) after suspend! Killing and restarting the daemon solves the issue. Anybody else seeing this behavior?
I was browsing a folder with lots of images. When I finished, I closed Nautilus and noticed my computer had become slow, so I checked with System Monitor and found that Nautilus was using almost 100 MB of RAM (with 4 tabs open). I'm not sure whether this is normal, because when I reopened the same folder with PCManFM it consumed less than 20 MB of RAM (also with 4 tabs open). Here's the screenshot from System Monitor.
My problem is extremely slow writes to the hard disk and 100% CPU usage, and it happens when I write to the internal hard drive, not to any external drive.
I tried a fresh Ubuntu install. No change. I am not even sure whether it is a software or hardware problem.
I have been using Ubuntu for 4 years now on a decent laptop with 2 GB RAM, dual-core Centrino, etc. Yet in all those years of using this superior OS, I still have to do hard shutdowns because some program runs wild. Lately there are 2 scenarios where I have to step in:
1: Amarok crashes and leaves the Python script for the GNOME shortcut keys running at 100% CPU. Or: thunderbird-bin keeps running after an apparently clean close of Thunderbird. That doesn't really bother me; I just kill both processes.
The bigger problem is scenario 2: VLC starts eating all my RAM (for no reason), my swap starts filling, and my computer becomes unusable for 10 minutes. Or: my MATLAB script is too big and eats up too much RAM, with the same result. Note: I have nothing against swap, because at many other times it's very useful.
These are stupid and annoying problems where there is an easy solution:
1) automatically kill the stupid process that runs at 100% CPU
2) automatically kill the stupid process that eats up all my RAM (see the sketch below)
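For scenario 2, a crude watchdog along these lines can stand in for doing the kill by hand. This is only a sketch under stated assumptions: it scans Linux's /proc, reads each process's VmRSS, and SIGKILLs anything over an arbitrary threshold; run as root it will happily kill innocent large processes, so a real version would want a whitelist.

Code:
/* oomwatch.c - sketch: every few seconds, SIGKILL any process whose
 * resident set exceeds a threshold. Threshold and interval are
 * arbitrary illustrative values. */
#include <ctype.h>
#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define RSS_LIMIT_KB (1500L * 1024)  /* kill anything over ~1.5 GB */

static long rss_kb(const char *pid)
{
    char path[64], line[256];
    long kb = -1;
    FILE *f;

    snprintf(path, sizeof path, "/proc/%s/status", pid);
    if (!(f = fopen(path, "r")))
        return -1;
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
            break;
    fclose(f);
    return kb;
}

int main(void)
{
    for (;;) {
        DIR *d = opendir("/proc");
        struct dirent *e;

        if (!d)
            return 1;
        while ((e = readdir(d)) != NULL) {
            long kb;

            if (!isdigit((unsigned char)e->d_name[0]))
                continue;            /* not a pid directory */
            kb = rss_kb(e->d_name);
            if (kb > RSS_LIMIT_KB) {
                fprintf(stderr, "killing pid %s (%ld kB)\n", e->d_name, kb);
                kill((pid_t)atol(e->d_name), SIGKILL);
            }
        }
        closedir(d);
        sleep(5);
    }
}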
I wrote a program that multiplies 2 matrices using multiple threads, and another that uses multiple processes and shared memory. Both are in C. I need to find the total memory usage of these programs. I know about the top command, but when my matrices are relatively small the programs complete so fast they don't even show up in top. How can I find the memory usage in these cases? Also, how can I find the total turnaround time of my programs?
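For runs too short to show up in top, it's easier to have the program report on itself. A minimal sketch: getrusage() returns the peak resident set size (ru_maxrss, in kilobytes on Linux), and clock_gettime() brackets the run for turnaround time; in the multi-process version, getrusage(RUSAGE_CHILDREN, ...) called in the parent after wait() accounts for the children too. From outside, /usr/bin/time -v ./program prints both figures as well.

Code:
/* report peak memory and wall-clock turnaround from inside the program;
 * on older glibc, compile with -lrt for clock_gettime */
#include <stdio.h>
#include <sys/resource.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    struct rusage ru;

    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* ... multiply the matrices here ... */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    getrusage(RUSAGE_SELF, &ru);

    printf("peak RSS:   %ld kB\n", ru.ru_maxrss);
    printf("turnaround: %.3f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}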
We are using WordPress with MySQL. The app and DB servers are separate machines, each with 6 CPUs, 6 GB RAM, and a 32-bit processor. We noticed recently that the mysqld process is consuming too much of the system, with CPU utilization reaching 100% under a load of 1200-1600 concurrent users.
Pasting the my.cnf file:

# The following options will be passed to all MySQL clients
[client]
port = 3306
I have a Java program that runs on Debian as a background process. Yesterday it stopped running. I looked at memory usage and the system had only 5 MB of memory left, so my guess is that the Java program ran out of memory.
However, after we restarted the Java program, the free memory count started going up, from 5 MB to over 400 MB. The increase happened slowly; with each passing minute a bit more memory was added to the free pool, all while the Java background process was running.
I wonder why this would happen. It's as if our Java program first brought the machine down by consuming all its memory, and then, after the restart, started giving memory back.
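If the JVM really did exhaust system memory, one standard guard, assuming the Java heap itself was the culprit, is to cap it at startup (for example java -Xmx512m ..., the value being illustrative); the JVM then throws OutOfMemoryError inside the program instead of starving the rest of the machine. The slow rise in free memory after the restart may simply be the kernel gradually reclaiming cache and swapped pages, rather than the program handing memory back.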
I have just started having a problem with Xorg: it is always using at least 30% of my CPU, and the whole system doesn't run smoothly. If I play a video it judders, and even dragging an icon judders across the screen. I'm running Ubuntu 10.10, 2.6.35-25-generic x86_64. VGA compatible controller: nVidia Corporation G98M [GeForce G105M] (rev a2).
I have a server running Samba with about 70 Samba users connected at a time. The system has 4 GB of memory, and it seems each Samba process is using only 3352 KB of memory when I run pmap -d (pid of samba).
But when I run the top command, it reports:

Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9% us, 4.9% sy, 0.0% ni, 93.3% id, 0.8% wa, 0.2% hi, 0.0% si
Mem: 3895444k total, 3163192k used, 732252k free, 352344k buffers
Swap: 2097144k total, 208k used, 2096936k free, 2487636k cached
Why would the system be using so much memory? By the way, the server is not running any other processes. The Samba version is 3.0.33-0.17.
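Reading the Mem: line above, the 3163192k "used" includes 352344k of buffers and 2487636k of cache, which leaves roughly 323212k (about 316 MB) actually held by processes, broadly consistent with 70 Samba processes at ~3352 KB each plus the rest of the system. The cache is reclaimed automatically under memory pressure, so this looks like normal Linux behavior rather than a leak.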
In this example, of my 993.4 MiB of memory, 575.9 MiB is reported as used, and 163.4 MiB of my 2.8 GiB of swap is in use. But in my Processes tab, the most memory-hogging program uses 98.3 MiB, then Pidgin at 25.9 MiB, then 18.9, 14.9, 6.2, 6.1, 5.2, 3.4, 3.3, 1.8, 1.8, 1.7, etc. I'm certain these don't add up to 575.9 MiB, so where is all this extra memory usage coming from?
I use a Debian Squeeze system running off a flash drive, i.e. based on a custom live image running in persistent mode. It runs great and I am grateful for the existence of Debian. However, I have a question. A lot of the machines I use this pen drive on are quite old, often with 512 MB RAM and old processors. I specifically built my system with Xfce and lightweight apps, starting from an initial live image using the standard-x11 package list (basically just Xorg with drivers and the base system). At first things ran very well, blazing fast even on the oldest systems, which could comfortably run Firefox and LibreOffice side by side (I need LO because all of my colleagues use Word docs, often with tracked changes, which AbiWord can't handle properly).
However, over time I've found that memory usage has risen, to the point where Firefox is now automatically killed on the older systems every time I start LibreOffice. How does one figure out why memory usage is going up? I've checked for inessential services and turned them off with "insserv -r". I've used only lightweight apps, as mentioned. Are there other general tips for reducing memory usage?
My server keeps hanging, so I have rebooted several times in the last couple of weeks. The system eats more and more memory, usage keeps increasing, and at some point it saturates and the server hangs. I could not find which process is eating the memory. I have used the commands below to check whether any process is using a lot of memory, but no luck; no process shows high memory usage.
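For growth that only shows up over days, point-in-time checks can miss the culprit. One low-tech approach is to log ps aux --sort=-rss | head periodically (from cron, say) and compare snapshots over time; it is also worth watching the Slab figure in /proc/meminfo, since kernel-side growth will not appear under any process.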
I was trying to get memory usage and disk usage with Sigar, on Windows and on Ubuntu. I got it working on Windows just by copying the Sigar library into the JDK's library directory, but I was unable to do the same on Ubuntu: I copied the library into the java-6-sun library directory, but I still can't run the program.
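For what it's worth, Sigar's Java bindings load a JNI native library, so the shared object (named along the lines of libsigar-amd64-linux.so, depending on architecture) has to be somewhere on java.library.path. Rather than copying it into the JDK, it is usually cleaner to start the program with java -Djava.library.path=/path/to/sigar/lib ... pointing at wherever the Sigar distribution's native libraries live.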
I used to have a program that displayed system information (CPU/RAM usage, things like that), but the name escapes me at the moment. The key feature of this program was that it was integrated into the desktop.
I have a computer with 16 GB of RAM. At the moment, top shows all the RAM is taken (NOT by cache), but the RAM used by the various processes adds up to far less than 16 GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
I am looking for a free database with low memory usage, InnoDB- and MEMORY-like engines, a C API, trigger support, and client/server support, for use in embedded Linux systems.
I am trying to run a simple Perl program that fetches stock price data from Yahoo for just one ticker symbol. It was running fine until this morning, when it froze and displayed the message: Out of memory!
I cleared my cache by running the following:
Code:
$ sync
$ echo 1 | sudo tee /proc/sys/vm/drop_caches
$ echo 2 | sudo tee /proc/sys/vm/drop_caches
$ echo 3 | sudo tee /proc/sys/vm/drop_caches

but it hasn't helped.
Even Firefox has been freezing, so I basically cannot do anything on my computer.
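As an aside, drop_caches is unlikely to help with an "Out of memory!" error: the page cache is already reclaimed automatically under memory pressure, so dropping it by hand only frees memory the kernel would have freed anyway. A Perl process dying this way usually means the script (or a module it uses) is itself accumulating data, and the fix belongs in the script.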
I have a system with 1 GB RAM running KDE 4. I created a tab to look at Physical Memory in the System Monitor program, which I assume looks at the same stats that top does. In that Physical Memory tab I have 3 tables: Used Memory, Free Memory, and Application Memory. The Used Memory table shows the system using .94 of .98 GiB. The Free Memory table shows .5 GiB of RAM free.
However, the Application Memory table shows only 339 MB of RAM in use. Note that top shows the same info. So where is the other .6 GiB of RAM that the Used Memory table shows as used? If I look at the process table, which is supposed to encompass all running processes, including those of the OS, it appears to add up to the 339 MB shown in the Application Memory table. Is the rest of the memory being held in reserve by the OS, to be used as needed? If so, why does Free Memory go down when another application is opened, instead of staying constant? I also noticed this memory "black hole" when I was running 11.0 on a system with 4 GB of RAM. The OS appeared to take up a large chunk of memory that was NOT being used by any application, making it "disappear": applications were using about 1.3 GiB of RAM, yet Free Memory showed only .7 GiB instead of the over 2 GB that should have been free.
I need to monitor the amount of free physical memory on Linux from within a large C program. The sampling will occur very frequently, so the measurement cannot be performance intensive. The fact that Linux uses much of the theoretically free memory for cache and buffers means that just measuring the free pages is not sufficient. Using free + cache + buffers gives an overestimate as not all cache/buffers can be freed, but I could get a rough idea of how much generally can't and subtract that from the answer.
Possible options that I've come across so far:
Parsing /proc/meminfo - but that involves reading from a file, which is slow.
Extracting the free, cache, and buffers values from the output of the free command - but is there a quick way to do this?
Parsing the /proc/freemem file produced by the API here - but this is again reading from a file. Is there a way to get that output directly?
Speed is an extremely high priority, and the answer must accurately represent the amount of memory that my program could expand into (to within a few MB).
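One option not on that list: the sysinfo(2) system call returns total, free, and buffer RAM in a single syscall, with no file parsing at all. It does not report the "cached" figure, though, so a sketch like the one below would still need an occasional /proc/meminfo read (or the steady-state estimate already described) to account for cache.

Code:
/* sketch: query free/buffer RAM via one syscall, no /proc reads */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    unsigned long long unit;

    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    unit = si.mem_unit;          /* all figures are in mem_unit bytes */
    printf("total:   %llu MB\n", si.totalram  * unit >> 20);
    printf("free:    %llu MB\n", si.freeram   * unit >> 20);
    printf("buffers: %llu MB\n", si.bufferram * unit >> 20);
    return 0;
}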
I am sure we all know the output of the top command in Linux. I want to obtain, programmatically, the values that top reports for CPU usage and memory usage. How do I do that?
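top reads those numbers out of /proc: overall CPU time from /proc/stat, per-process figures from /proc/[pid]/stat and /proc/[pid]/status, and memory from /proc/meminfo. A minimal sketch of the CPU side, sampling twice and taking the delta (CPU "usage" is only meaningful over an interval):

Code:
/* sample /proc/stat twice and compute overall CPU usage in between */
#include <stdio.h>
#include <unistd.h>

static int read_cpu(unsigned long long *idle, unsigned long long *total)
{
    unsigned long long user, nice, sys, idl, iowait, irq, softirq;
    FILE *f = fopen("/proc/stat", "r");

    if (!f)
        return -1;
    if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
               &user, &nice, &sys, &idl, &iowait, &irq, &softirq) != 7) {
        fclose(f);
        return -1;
    }
    fclose(f);
    *idle  = idl + iowait;
    *total = user + nice + sys + idl + iowait + irq + softirq;
    return 0;
}

int main(void)
{
    unsigned long long i1, t1, i2, t2;

    if (read_cpu(&i1, &t1) != 0)
        return 1;
    sleep(1);
    if (read_cpu(&i2, &t2) != 0)
        return 1;
    printf("CPU usage: %.1f%%\n",
           100.0 * (1.0 - (double)(i2 - i1) / (double)(t2 - t1)));
    return 0;
}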