I'm running into a problem where my system is running out of disk space on the root partition, but I can't figure out where the runaway usage is. I've had a stable system for a couple of years, and it just ran out of space. I cleaned up some files to get the system workable again, but I can't find the big usage area, and I'm getting conflicting results. For example, when I run df it says I'm using 44 GB out of 58 GB:
Code:
[root@Zion ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
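When df and a look around the tree disagree, du usually settles it. A minimal sketch, assuming the root filesystem is the full one; `biggest_dirs` is just a name for illustration, and the -x flag keeps du from wandering into other mounted filesystems, which is what makes the numbers comparable to df's:

```shell
# biggest_dirs DIR: list the 20 largest directories under DIR, sizes in MB.
# -x stays on one filesystem, so /proc, NFS mounts, etc. are not counted.
biggest_dirs() {
  du -xm "$1" 2>/dev/null | sort -rn | head -n 20
}
# On the box above: biggest_dirs /
```

Note that files held open by a process after deletion still consume space but won't show up in du at all; `lsof | grep deleted` (if lsof is installed) is the usual check when du and df still disagree.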
I live in the boonies, so I have satellite internet. It's not too bad, but I'm restricted to 200 MB of downloads per day.
I'm looking for an app that will keep track of my usage so I don't go over 200 MB. I was using "System Monitor", but it's a little buggy, so I'd like to try something else.
I want to use the at command to execute a script at a specific time, for example at 12:30pm, but only once. I tried it, but the script executes every day at 12:30pm (that's my problem). What I actually want is a script that executes every two months from a specific starting date. For example, starting from January 12, 2010, the script should run again by March 12, 2010.
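For the record, at runs a job exactly once; a job that fires every day at 12:30pm is usually coming from cron, so it's worth checking crontab -l. cron is also the natural tool for the every-two-months schedule. A sketch of a crontab entry (a config fragment; /path/to/script.sh is a placeholder, and the month list assumes the January 12 start, so a February start would use 2,4,6,8,10,12 instead):

```
# min  hour  day-of-month  month          day-of-week  command
30     12    12            1,3,5,7,9,11   *            /path/to/script.sh
```

This fires at 12:30pm on the 12th of January, March, May, and so on, which matches the January 12 to March 12 example.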
I used to have a program that displayed system information (CPU/RAM usage, stuff like that), but the name escapes me at the moment. The key feature of this program is that it was integrated into the desktop.
I am currently developing a program that I need to compare to other, similar programs, mainly to provide a cost vs. benefit analysis for myself and coworkers. Does anyone know of a program that can accurately provide this information, or otherwise have an idea of how to start coding one? I have seen in research papers that speed was evaluated as the seconds/microseconds taken for processes to finish; is this legitimate?
I wrote a program that multiplies two matrices using multiple threads, and another one using multiple processes and shared memory, both in C. I need to find the total memory usage of these programs. I know of the top command, but when my matrices are relatively small the programs don't even show up in top because they complete so fast. How can I find the memory usage in these cases? Also, how can I find the total turnaround time of my programs?
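For turnaround time, wrapping the run in a clock read is enough, and it works no matter how fast the program exits; a minimal sketch (`time_cmd` is just an illustrative name). For peak memory of a short-lived run, GNU time's /usr/bin/time -v, where available, reports "Maximum resident set size" after the process has already exited, which sidesteps the too-fast-for-top problem:

```shell
# time_cmd CMD [ARGS...]: run the command and print its wall-clock time.
time_cmd() {
  start=$(date +%s)
  "$@"
  end=$(date +%s)
  echo "elapsed: $((end - start)) s"
}
# Example: time_cmd ./matmul_threads 512
# Peak memory (GNU time, if installed): /usr/bin/time -v ./matmul_threads 512
```

For sub-second resolution, calling getrusage() or clock_gettime() from inside the C programs themselves gives finer numbers than a shell wrapper.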
I have a Java program that runs on Debian as a background process. Yesterday the Java program stopped running. I looked at the memory usage: the system only had 5MB of memory left, so my guess is that the Java program ran out of memory to use.
However, after we restarted the Java program, we could see the free memory count start to go up. It kept climbing from 5MB to over 400MB. The increase happened slowly; when I measured it, I could see that with each passing minute a bit more memory was added to the free pool, while the Java background process was running.
I wonder why this would ever happen. It's as if our Java program first brought the machine down by consuming all the memory, and then, after the restart, started giving memory back.
Since I own one of those Centrino based Core 2 notebooks that create annoying buzzing noises when idle (= entering C3 or C4 power saving states), I'm looking for a program that creates artificial CPU usage. It should allow me to limit the CPU usage to a certain percentage (I know that there are a lot of easy ways to create 100% usage ;-).
Another option would be to disable the C3 or C4 states, but in newer kernels the sysfs interface to set the max_cstate on-the-fly was removed for some reason, and I don't always want to reboot after switching from AC to battery (and vice versa).
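One way to hold CPU usage near a target percentage without touching the kernel is to duty-cycle a busy process with SIGSTOP/SIGCONT; this is essentially what the cpulimit tool does. A rough sketch, assuming a sleep that accepts fractional seconds (GNU and BSD both do); `cpu_load` is an illustrative name, the 0.1 s period and `yes` as the load are arbitrary choices:

```shell
# cpu_load PERCENT SECONDS: run a busy process at roughly PERCENT% of one
# core for SECONDS, by stopping/continuing it on a 0.1 s duty cycle.
cpu_load() {
  pct=$1; dur=$2
  yes > /dev/null &                      # the artificial load
  pid=$!
  on=$(awk -v p="$pct" 'BEGIN { printf "%.3f", p / 1000 }')
  off=$(awk -v p="$pct" 'BEGIN { printf "%.3f", (100 - p) / 1000 }')
  end=$(( $(date +%s) + dur ))
  while [ "$(date +%s)" -lt "$end" ]; do
    kill -CONT "$pid"; sleep "$on"       # run for pct% of each period
    kill -STOP "$pid"; sleep "$off"      # pause for the rest
  done
  kill -CONT "$pid"; kill "$pid" 2>/dev/null
  wait "$pid" 2>/dev/null || true
}
# Example: cpu_load 30 60   # roughly 30% of one core for a minute
```

The achieved percentage is approximate (signal delivery and sleep granularity both add jitter), but it should be enough to keep the package out of the deep C-states.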
Basically I have a machine with 16GB of RAM and have just discovered that a single process using all of it can crash the whole system. How could I run a process in such a way that if more than 90% of system memory is used, the process crashes immediately instead of taking the machine down?
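One sketch, assuming Linux (/proc/meminfo) and a process whose allocations go through normal malloc/mmap: start it in a subshell with ulimit -v set to 90% of RAM, so allocations past the cap fail and the process dies instead of the machine. `ninety_pct_kb` and `hungry_process` are illustrative names; cgroups (the memory controller) are the heavier-duty alternative when a hard kill is wanted:

```shell
# ninety_pct_kb [MEMINFO]: print 90% of MemTotal in KB. Reads /proc/meminfo
# by default; the optional file argument exists so the math is testable.
ninety_pct_kb() {
  awk '/^MemTotal:/ { print int($2 * 9 / 10) }' "${1:-/proc/meminfo}"
}
# Usage on the 16 GB box (the subshell keeps the limit out of your shell):
#   ( ulimit -v "$(ninety_pct_kb)"; exec ./hungry_process )
```

ulimit -v caps address space per process, not system-wide usage, so this matches the "one runaway process" case; it won't protect against many small processes adding up to 90%.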
I have a third party program (tightvnc) which I want to monitor and detect if it loses a connection with a client. I don't care if the client has the program open but isn't doing anything with it, I only want to know if the actual TCP connection is lost.
Since TCP takes forever to die on its own, I was thinking the best way to detect a lost connection is by monitoring the bandwidth on the TCP port allocated to the VNC connection. Are there any tools built into Red Hat (RHEL 5.2) that I could use to do this? Since I don't have full control of the operating system, I would prefer to use built-in tools rather than trying to get a new tool installed.
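A sketch using netstat, which RHEL 5.2 ships: rather than measuring bandwidth, count ESTABLISHED connections on the VNC port (5900 is assumed here; adjust for your display number). `vnc_conns` is an illustrative name, and it reads netstat -tn style output on stdin so the filter can be tried offline:

```shell
# vnc_conns: count ESTABLISHED TCP connections whose local address is on
# port 5900, given `netstat -tn` style output on stdin.
vnc_conns() {
  awk '$6 == "ESTABLISHED" && $4 ~ /:5900$/ { n++ } END { print n + 0 }'
}
# Live check, e.g. from a cron job every minute:
#   netstat -tn | vnc_conns
```

The caveat matches what the poster says: a dead peer can sit in ESTABLISHED until TCP keepalive fires, so this detects clean disconnects immediately but unplugged clients only after the keepalive timeout.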
I am sure all of us know the output of the top command in Linux. I want to get the values that top reports for CPU usage and memory usage from my own program. How do I do that (programming-wise)?
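top reads those numbers from /proc, so a program can do the same. A sketch of the two parses (the function names are illustrative, and both read stdin so they can be tested offline): memory usage comes straight from /proc/meminfo, while CPU usage needs two samples of the /proc/stat "cpu" line, with usage = change in non-idle jiffies divided by change in total jiffies:

```shell
# cpu_jiffies: from a /proc/stat "cpu" summary line, print "<idle> <total>".
# Sample twice, T seconds apart; busy% = 100 * (1 - d_idle / d_total).
cpu_jiffies() {
  awk '/^cpu / { t = 0; for (i = 2; i <= NF; i++) t += $i; print $5, t }'
}
# mem_used_pct: percentage of RAM in use, from /proc/meminfo.
mem_used_pct() {
  awk '/^MemTotal:/ { t = $2 } /^MemFree:/ { f = $2 }
       END { printf "%.1f\n", (t - f) * 100 / t }'
}
# Live: cpu_jiffies < /proc/stat; mem_used_pct < /proc/meminfo
```

From C, the same files can be read with fopen/fscanf; there is no syscall that returns top's percentages directly, since the CPU figure only exists as a delta between samples.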
I was trying to get memory usage and disk usage with Sigar on Windows and Ubuntu. On Windows this worked by just copying the Sigar library into the JDK's library directory, but I was unable to do the same on Ubuntu. I've copied the library into the java-6-sun directory, but I still can't run the program.
Is there any way to monitor one process' CPU usage and RAM usage over time on Linux? I am trying to change to a cheaper VPS and need to work out what level of CPU and RAM I need!
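A sketch using ps, which can report a single PID's figures; appended to a log file every few seconds, this gives the over-time picture for sizing the VPS (`sample_proc` is an illustrative name; pidstat from the sysstat package does the same with less effort, if it can be installed):

```shell
# sample_proc PID: one line with timestamp, %CPU, and resident memory (KB).
sample_proc() {
  ps -p "$1" -o %cpu=,rss= | awk -v t="$(date +%s)" '{ print t, $1, $2 }'
}
# Log every 5 seconds until the process exits:
#   while kill -0 "$PID" 2>/dev/null; do sample_proc "$PID" >> usage.log; sleep 5; done
```

One caveat for sizing: ps's %CPU is an average over the process's whole lifetime, so for burst detection the jiffy counters in /proc/PID/stat sampled at intervals are more faithful.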
I've come across a really strange issue on one of my RHEL servers. The "free" command shows that 7019 MB of memory is actually in use by my system, but when I sum up the actual usage (or even virtual usage, as in the example below) it doesn't add up; the sum is far less than what "free" reports:
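The gap is usually the page cache and buffers: the kernel counts them as "used" in free's first line even though it hands that memory back on demand, so per-process figures will never sum to it (free's "-/+ buffers/cache" line shows usage with them excluded). A quick sketch to put the two numbers side by side:

```shell
# Sum resident set sizes over every process, in MB, to compare with free.
# Note RSS double-counts shared pages, so even this sum overstates a bit.
ps -eo rss= | awk '{ s += $1 } END { printf "total RSS: %d MB\n", s / 1024 }'
```

If the summed RSS plus buffers/cache still falls well short of free's "used", kernel slab memory (visible in /proc/meminfo and slabtop) is the next place to look.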
I recently installed Fedora 12 x64 on my laptop. Whenever I check "System Monitor", the CPU usage is always 10%, whether it is split between the two cores or all on one. It keeps the CPU at 37 degrees Celsius (body temperature). I disabled crond from starting (I think it is similar to Scheduled Tasks in MS Windows), as well as ip6tables, sendmail, and smolt.
"System Monitor" tells me that only it itself is using 2%cpu but everything else is using 0%. If it makes any difference, I didn't install the ATI drivers because after installing the drivers, Fedora never gets past the boot screen. However, I read the instruction manual and it said that after installing you must run some configuration utility. I never did that so I will try that again on the weekend.
Does anyone know of a free broadband usage meter for Linux which will record the amount of uploads and downloads on the network and alert you when the limit has been reached? I was using TB Meter on Windows Vista.
I am running into a scenario where inode utilization (df -i) on a partition is 100%. I want to know:
1) Is there a better way to count all the files in a partition and display the total number of files in each directory? I can get an approximate total for the entire partition with the following commands:
ls -Rla | wc -l
find -type f | wc -l
whereas ls -Rla gives too lengthy an output, listing all the files in each directory.
2) How can I find inode utilization for each user or system account? There is a huge number of files; how do I remove them?
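When inodes run out, what helps is a per-directory file count rather than a per-directory size. A sketch with portable find/sed (`files_per_dir` is an illustrative name); -xdev stays on the one filesystem, which is the right scope since each filesystem has its own inode table:

```shell
# files_per_dir DIR: directories under DIR ranked by number of files in them.
files_per_dir() {
  find "$1" -xdev -type f |
    sed 's|/[^/]*$||' |        # strip the filename, keep its directory
    sort | uniq -c | sort -rn | head -n 20
}
# On the full partition: files_per_dir /mountpoint
```

For the per-user question, `find /mountpoint -xdev -printf '%u\n' | sort | uniq -c` does the same ranking by owner, though -printf is GNU find only. The top directory this reports (often a spool, session, or cache directory full of tiny files) is usually the one to clean out.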
I'm trying to understand the performance of my machine, and memory usage just isn't adding up. When I run top it will typically show 301M of 308M used, but the total of everything in the RES column is nowhere near 300M, and the total of the %MEM column isn't more than 20-30%. So how do I figure out what is using all the memory? And is there some way to control it to optimize performance?
Just picked up a 64 GB M4 SSD. A bit small, I know, but I wanted to have a play and try the SSD thing out. I am chasing partitioning suggestions. The problem is, you guessed it, space. As always with SSDs, space is at a premium. Formatted, I am apparently going to end up with about 58 GB usable. A disk usage analysis of my current Fedora 14 install on a 7200rpm drive gives me 30 GB of files in /home and about 15 GB in root.
Of that 30 GB of home files, 8 GB is tied up in Thunderbird alone, so I was going to allocate about 45 GB to /home and about 3 GB to swap. The problem is / (root): I have 8 GB tied up in /usr and another 5 GB in /var. Is this normal? Can I delete some of those files, or will a fresh install of Fedora 15 eventually blow out to fill all that anyway? I know I am stuck with /usr on the SSD, but can I move /var to a 7200rpm drive instead of choking up my teeny weeny SSD? How have other people partitioned their SSDs?
Memory on my Linux database server is all used up. I first noticed this morning and rebooted the box; five hours later it was all used up again. I want to find out which process is responsible for using most of the memory. What Red Hat Linux utility can list processes sorted by memory usage, like the Windows Task Manager? free and vmstat give a summary, but not per-process figures. top appears informative, but the sum of the non-zero %MEM values never adds up to 100.
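A sketch of the Task Manager view with stock tools: ps reports per-process resident memory, and sorting it descending puts the likely culprit on top (on Red Hat's procps, `ps aux --sort=-rss` will do the sort itself, and pressing M inside top re-sorts it by memory):

```shell
# Top 10 processes by resident memory; first column is RSS in KB.
ps axo rss=,comm= | sort -rn | head -n 10
```

As for %MEM never summing to 100: much of "used" memory is page cache rather than process memory, and shared pages are counted once per process, so the per-process column was never meant to total cleanly.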
top says there's only 12MB free (out of 1GB), but I can't figure out what's using all the RAM. rtorrent is using 13MB, and the rest are in the bytes range. (I ran top as root.)
I have this happening on my Vaio laptop with FC11: top shows /usr/share/scripts/shared/onlyservice or /usr/share/logwatch/scripts/shared/onlyservice running at 100% of one core of my CPU. It takes 30 to 40 minutes to stop. The same thing happens when I open a GUI log viewer. This of course wouldn't matter on a desktop, but on a laptop it's kind of expensive. On boot-up, after GRUB, I get the error: Invalid TSD data!
Having major problems with mysqld causing CPU usage to rocket to 100% on one core for a short time. It usually drops back down to 1-7%, though sometimes it can last an extended period. This is a problem because anything using MySQL becomes unresponsive during that time.
I have Seamonkey set to play a .wav file when mail arrives. Since the latest update to pulseaudio, whenever mail arrives part of the sound plays, then one of my processor cores goes to 100% running pulseaudio. When I kill that process, the system returns to normal. This happens every time a sound is played.
I've finally set up upmixing on my X-Fi (Fedora 12), and now I can hear 6-channel sound in XMMS and MPlayer, but not in Rhythmbox. On some forum I found that Rhythmbox uses the GStreamer configuration. Where can I find that in Fedora 12, and how can I make Rhythmbox use my ALSA 20to51 mod?
Is anyone here fluent with the usage of the .htaccess file? Is it the way to go to deter search bots, or is there a better method? Never mind; I already have a thread about .htaccess here.