Ubuntu :: VLC Using High Amounts Of Memory?
Apr 30, 2011
I just installed Kubuntu 11.04.
I ran VLC and it brought my PC to a halt.
I attached a picture of memory usage from htop.
Why do you think VLC is using so many system resources?
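For anyone hitting the same thing, a first check worth doing is whether VLC itself is holding the memory or whether most of it is the kernel's reclaimable disk cache; this is only a generic diagnostic sketch, not specific to this report:
Code:
# show VLC's own footprint (RSS is resident memory, in KB)
ps -C vlc -o pid,rss,vsz,%mem,cmd
# compare with the overall picture; the "cached" column is reclaimable disk cache, not VLC's memory
free -m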
I am monitoring physical memory in a server I administer, and my hardware provider told me they had increased the physical memory to 4 GB. However, using several tools (free -m; top; dmesg | grep Memory; grep MemTotal /proc/meminfo) I discovered that I actually have 3 GB, not 4. My doubt comes from the fact that dmesg | grep Memory tells me I have 3103396k/4194304k available. The first number is effectively 3 GB, but the second one is 4! So why am I seeing two different numbers?
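One hedged way to dig into the 3103396k/4194304k gap is the BIOS memory map the kernel prints at boot: on many boards part of the 4 GB address space is reserved for PCI devices and graphics, so the kernel reports less usable RAM than is physically installed, and a 32-bit kernel without PAE caps it further. A diagnostic sketch:
Code:
# usable RAM as the kernel sees it
grep MemTotal /proc/meminfo
# BIOS e820 map: the "reserved" ranges explain the gap between installed and available RAM
dmesg | grep -i e820
# rule out a 32-bit non-PAE kernel (an assumption worth checking)
uname -m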
My problem seems to be very simple: high memory usage. I occasionally use Movie Player to watch a few shows, and I use Firefox as well. My memory usage starts out quite small, about 500 MB, but after using Firefox lightly and Movie Player it jumps to almost 2 GB, and this is after they've been closed. What gives? I've attached an image so you can see what I'm talking about.
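In case it helps, the figure that matters for applications is the one with caches excluded; Linux deliberately keeps recently read files in otherwise idle RAM and gives it back on demand. A quick check (sketch, using the classic procps output):
Code:
free -m
# the "-/+ buffers/cache" row shows memory used by programs with the kernel's
# reclaimable disk cache taken out; if that number drops after closing the apps, nothing is leaking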
Most of my many Linux installs boot up to a memory use of 170-190 MB with no open or running programs. But Ubuntu 9.10 shows 305-310 MB, and the top RAM user is compiz.real, which I think is the desktop effects application. I have the nvidia video card driver installed, along with the nvidia X-Server configuration tool. Also, I cannot find where the compiz options are (the display options in System > Preferences do not open, and say to use my video card tool).
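If you want to rule compiz out, one option on GNOME 2-era Ubuntu is to swap it for plain metacity for the current session and compare memory use; a sketch, assuming metacity is installed (it normally is on 9.10):
Code:
# replace compiz with metacity for this session only; logging out and back in restores compiz
metacity --replace &
free -m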
I have a kernel module in which I am trying to do a phys_to_virt() and then memcpy to the returned virtual address. The problem is that my module crashes during the memcpy when doing this on a physical address in high memory. Do I need to perform some kind of mapping operation on high-memory addresses for this to work?
I installed Debian sid about a month ago (first Xfce, then GNOME) but have noticed that it's really slow. Upgrades take ages, launching (and using) Firefox takes so much time... In comparison to my Ubuntu, Arch Linux (on the same computer) or previous installation of Debian, there is clearly a problem somewhere. Today I ran "top" sorted by memory usage: 3.5% xulrunner-stub, 2.1% dropbox, 1.4% aptitude (doing an upgrade), 1.4% clementine... nothing terrible, but I still have 2.7 GB of RAM used (more than 50%).
$ free -m
total used free shared buffers cached
Mem: 3967 2685 1282 0 79 1938
[code]....
I have a server running the samba service, and there are about 70 samba users connected at a time. The system has 4 GB of memory, and it seems each samba process is using only 3352 KB of memory.
When I run the command
pmap -d (pid of samba)
It gives:
b7ffa000 4 rw-s- 0000000000000000 0fd:00003 messages.tdb
bfe46000 1768 rw--- 00000000bfe46000 000:00000 [ stack ]
ffffe000 4 r-x-- 0000000000000000 000:00000 [ anon ]
mapped: 33384K writeable/private: 3352K shared: 20504K
But when I run the top command, it shows the following:
Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9% us, 4.9% sy, 0.0% ni, 93.3% id, 0.8% wa, 0.2% hi, 0.0% si
Mem: 3895444k total, 3163192k used, 732252k free, 352344k buffers
Swap: 2097144k total, 208k used, 2096936k free, 2487636k cached
Why could the system be using so much memory? By the way, the server is not running other processes. The samba version running on it is 3.0.33-0.17.
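The 2.4 GB listed as "cached" in that top output is the kernel's page cache, which is expected on a busy file server and is released when programs need it. To see what the smbd processes themselves add up to, a rough sketch (shared libraries are counted once per process, so this overestimates):
Code:
# sum resident memory across all smbd processes, in KB
ps -C smbd -o rss= | awk '{sum += $1} END {print sum " KB total RSS"}'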
I upgraded from Fedora 13 to 14 over the network. Everything seems to have worked. The one problem after my install is that I have noticed that setroubleshootd consumes a lot of memory.
[Code]
It doesn't take long for setroubleshootd to jump in memory usage. I can kill the process, but it will start up again. I have tried disabling the service, but it doesn't show up in /etc/init.d:
# service setroubledshootd stop
setroubledshootd: unrecognized service
So I am not sure what I can do to resolve the issue with setroubleshootd besides killing it off every 15 minutes.
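Note that the command above has an extra "d" in the name, which is probably why it came back as unrecognized; on Fedora the init script is normally just called setroubleshoot (an assumption worth verifying with ls /etc/init.d). A sketch for stopping and disabling it, run as root:
Code:
service setroubleshoot stop
chkconfig setroubleshoot off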
In our database server, when checking memory usage with the top command, we always see 32 GB of RAM utilized. We have set sga_max_size to 8 GB and the PGA to 3 GB. We tried shutting down the Oracle DB and the memory went down to 24 GB according to top. After a cold reboot of the DB server, it went down to 1.5 GB.
But once users started working again at the end of the day, memory usage went back up to 32 GB.
Top Command output (truncated)
======================
top - 15:18:27 up 5 days, 19:43, 1 user, load average: 0.55, 0.39, 0.32
Tasks: 599 total, 3 running, 596 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.5%us, 0.4%sy, 0.0%ni, 97.9%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 32949628k total, 32823336k used, 126292k free, 238808k buffers
[Code]....
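Much of that 32 GB is likely the kernel's file cache plus Oracle's SGA, which lives in shared memory and is not freed just because sessions go idle. A hedged way to see the SGA segments directly instead of inferring from top:
Code:
# System V shared memory segments; the SGA shows up here, owned by the oracle user
ipcs -m
# how much of "used" is actually reclaimable cache
grep -E 'MemFree|Buffers|^Cached' /proc/meminfo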
I have an Ubuntu server running in our small office. Among its many duties is report generation. It uses PHP and DOMPDF (a PHP library for converting HTML/CSS to PDFs for printing). PHP's default memory limit of 32MB is not even close to being enough to pull large amounts of data from the database and generate images/tables/PDFs with that data.
I increased the memory limit to 64MB, and that is adequate for reports under 3 pages or so (it varies based on table complexity, images, etc). If any user tries to generate a report longer than that, PHP just throws an "out of memory" error and doesn't produce the report.
My question is: what are the possible consequences of increasing the memory limit yet again to 128MB or maybe even higher? The server isn't terribly powerful. It has 2GB RAM and 4GB swap space. I know that isn't much but this is a small office and at most I can only see two or three people trying to run reports at the same time. As for security, apache is currently only serving pages in the local network, but sometime within the next year I'll probably have it hosting a public website (currently using a hosting service). Is a high memory limit a potential security risk when exposed to the internet?
EDIT: Sorry, PHP's default memory limit is 16 MB, not 32 as I said. The question still stands, however.
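For what it's worth, memory_limit is a per-request cap, so the main cost of 128MB is a handful of simultaneous report requests eating into the 2GB of RAM rather than a security hole as such; the limit only matters to an attacker who can already run expensive scripts. A sketch of raising it globally (the php.ini path is an assumption for an Ubuntu/Apache mod_php setup; calling ini_set('memory_limit', '128M') inside just the report script is the narrower alternative):
Code:
# check the current limit, raise it, and reload Apache
php -i | grep memory_limit
sudo sed -i 's/^memory_limit = .*/memory_limit = 128M/' /etc/php5/apache2/php.ini
sudo /etc/init.d/apache2 reload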
Basically, I have a machine with 16GB of RAM and have just discovered that a single process using all of it can crash the whole system. How could I run a process on the system in such a way that if more than 90% of system memory is used, the process immediately crashes?
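One hedged approach is a per-process address-space limit via ulimit, so the process gets allocation failures (and normally dies) instead of dragging the whole machine down; note that ulimit -v caps virtual memory in kilobytes, which is not exactly the same as 90% of RAM "used". A sketch, where ./your_process is a placeholder:
Code:
# cap the process at roughly 90% of physical RAM
limit_kb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) * 90 / 100 ))
( ulimit -v "$limit_kb"; exec ./your_process )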
I am having a problem with my server that I would call a bit "important". For the last 3 weeks, the used space on my hard disks (RAID 1) has kept growing. I have 2 x 1 TB HDDs in RAID 1, and I did not install anything during those weeks. The used space just grew from 90 GB to 580 GB. The situation is stable there now, but I think it's not normal.
The bandwidth usage is low (about 120 GB in 2 months), and I am running 6 Counter-Strike game servers, a forum, a very small website and some local stuff... A friend of mine told me that my server could have been hacked, and I am afraid it has been... Some useful information: when I reboot the server the used space goes down again to ~100 GB and then starts going up again. I can't really find where all those files are located:
[Code]...
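Two diagnostics usually narrow this kind of thing down: find which directories are actually growing, and check for files that were deleted but are still held open by a running process, since their space is only released when the process exits or the machine reboots, which would match the symptom. A sketch:
Code:
# biggest directories on the root filesystem (-x stays on one filesystem), sizes in KB
du -x --max-depth=2 / 2>/dev/null | sort -n | tail -20
# files that are deleted but still open somewhere
lsof +L1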
I recently upgraded to Ubuntu 10.10 and I am experiencing ultra-high memory usage of 2 GB by gnome-settings-daemon after suspend! Killing and restarting the daemon solves the issue. Is anybody else seeing this behavior?
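A small wrapper for the workaround already described, for anyone who wants it handy after resume; the binary may live under /usr/lib/gnome-settings-daemon/ rather than on the PATH depending on the package (an assumption), and the leak itself looks like a bug worth reporting:
Code:
killall gnome-settings-daemon
gnome-settings-daemon >/dev/null 2>&1 &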
I have a PHP script which is used for bulk mailing. I run the script every minute through a cron job: I put the path of the PHP script in a .sh file and execute the .sh file from cron. Every time the script runs it uses a lot of memory, which results in a server crash. How can I restrict memory usage for that process to a minimum, or set its priority low so that it only runs when there are no higher-priority jobs, so that the server keeps running normally when the script executes?
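A hedged sketch of both ideas combined, so the cron line (or the .sh wrapper) runs the job at low CPU and I/O priority and with its own memory cap; the script path is a placeholder:
Code:
# nice 19 = lowest CPU priority, ionice -c 3 = idle I/O class, -d sets a per-run memory_limit
nice -n 19 ionice -c 3 php -d memory_limit=64M /path/to/bulkmail.php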
I was browsing a folder with lots of images. After I finished, I closed Nautilus and noticed that my computer had become slow, so I checked it with System Monitor and found that Nautilus was using almost 100 MB of RAM (with 4 tabs open). I'm not sure whether this is normal or not, because I tried reopening the same folder with PCManFM and it consumed less than 20 MB of RAM (with 4 tabs open).
Here's the screenshot from System Monitor.
I am trying to move a large number of files (over 30k files, 86 GB) to another HDD, but I get an "Argument list too long" error. I tried rsync, cp and mv, and still get the same error.
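The limit comes from the shell expanding the wildcard into one enormous command line, not from the copy tools themselves. Two ways around it (a sketch; the source and destination paths are placeholders):
Code:
# let find hand the files to mv in safely sized batches
find /source/dir -maxdepth 1 -type f -print0 | xargs -0 mv -t /destination/dir/
# or give rsync the directory itself rather than a wildcard
rsync -a /source/dir/ /destination/dir/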
I've been using GIMP's 'save for the web' tool to reduce the file sizes of images.
I now have a directory with about 50 images. I'd like to avoid processing them all by hand.
I have a (very) basic knowledge of programming, and I'm comfortable with the commandline. I don't mind doing some homework on how to use new tools.
All I'm really concerned with here is reducing the file sizes of the images I have.
What possible pathways are there for me to prepare large numbers of images for the web?
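One common command-line route is ImageMagick, which can do roughly what 'save for the web' does (strip metadata, cap dimensions, recompress) in a loop over the directory; a sketch that assumes JPEGs and an installed imagemagick package, with the size and quality values picked arbitrarily:
Code:
mkdir -p web
for f in *.jpg; do
    # strip metadata, shrink only if larger than 1024px, recompress at quality 80
    convert "$f" -strip -resize '1024x1024>' -quality 80 "web/$f"
done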
In my cPanel, the RAM on my server is reported as:
Physical used 17.31%
768MB Free (928MB Total)
yet when I use top I only have 88 MB free:
Code:
Mem: 950568k total, 862464k used, 88104k free, 91072k buffers
Swap: 522072k total, 136k used, 521936k free, 604752k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
[code]....
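The two tools appear to be counting differently: top's "free" excludes buffers and cache, while cPanel very likely adds them back in, since free + buffers + cached in that output (88104 + 91072 + 604752 kB) comes to roughly 766 MB, essentially the 768MB cPanel reports. A quick way to see both views at once (sketch):
Code:
free -m
# the "-/+ buffers/cache" row is the cache-excluded figure; the "Mem:" row matches what top shows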
The company that I work for has massive amounts of data on our file server, and that amount continues to grow. What we are looking for is a search appliance that will make it easier to search all documents on the file server, including the content of those documents. I don't really like the idea of everyone using an app like X1 and searching the share drives that way on their individual PCs. I would like a search appliance.
I have a car stereo that reads a USB drive with all my music on it; however, to sort through the music it finds folders containing music and displays them all in one list. I find this interface annoying, because in order to sort the music by artist I have to manually move it out of the album folders by hand, which takes a long time for 11+ GB of music. So I was trying to use the Linux CLI to speed up the process, with a command like this:
Code:
mv /media/usb/music/*/*/* /media/usb/music/*/
But for some reason this moves all my music into the folder that comes last alphabetically on my drive. The music is all pre-arranged like this: /media/usb/music/artist/album/song
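The original command goes wrong because the destination wildcard also expands, and mv simply treats the last directory it is given as the target for everything else. A sketch that moves each artist's songs up out of the album folders and then removes the emptied directories, assuming the artist/album/song layout described (-n skips files whose names clash between albums):
Code:
cd /media/usb/music
for artist in */; do
    # move every file two levels down (album/song) up into the artist folder
    find "$artist" -mindepth 2 -type f -exec mv -n -t "$artist" {} +
    # remove the now-empty album directories
    find "$artist" -mindepth 1 -type d -empty -delete
done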
Are there any tools out there that let me select a bunch of data and burn it to multiple CDs or DVDs? I'm using K3b but have to manually select CD- and DVD-sized amounts.
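If no single tool fits, one low-tech route is to archive the selection and split it into disc-sized chunks that K3b can burn as plain data files; a sketch where the 4480M chunk size is an assumption for single-layer DVDs and the path is a placeholder:
Code:
# pack and split into DVD-sized pieces
tar -czf - /path/to/data | split -b 4480M - backup.tar.gz.part_
# restore later by concatenating the pieces
cat backup.tar.gz.part_* | tar -xzf -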
Sometimes I need to copy a huge directory to another directory (local filesystem), and usually I use the "cp" or "rsync" commands. These commands are good, but depending on the size of the data being copied, the copy is painfully slow. I realize we are limited by the hardware and its limitations, i.e., I/O speed, and the filesystem (which is usually ext3). Are there any other utilities, maybe not well known, that can handle copying large amounts of data (mostly in the TB range)?
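For local copies, a tar pipe is one old trick that avoids some of cp's per-file overhead, although nothing will beat the disks' raw throughput; a sketch with placeholder paths:
Code:
# stream the source tree through tar, preserving permissions and timestamps
tar -C /source/dir -cf - . | tar -C /destination/dir -xpf -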
I have a small office network (about 30 machines) with a Linux gateway (6 Mbps internet bandwidth). Every user gets only 500 Kbps of bandwidth, and the internet is very slow for them. The internet has been getting slower lately, and I noticed that there are huge amounts of small packets (78 bytes, 48 bytes) coming to the Linux machines. My question is: how can I work out which machine(s) are sending those small packets? Do you have any ideas using the netstat command?
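netstat only shows sockets on the gateway itself; to see which LAN hosts are emitting the small packets, a capture on the internal interface is more direct. A rough sketch to run as root on the gateway, where eth0 and the 100-byte cutoff are assumptions:
Code:
# capture 2000 packets shorter than 100 bytes and count them per source address
tcpdump -i eth0 -nn 'less 100' -c 2000 2>/dev/null \
    | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head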
We've been trying to become a bit more serious about backup. It seems the better way to do MySQL backup is to use the binlog. However, that binlog is huge! We seem to produce something like 10 GB per month. I'd like to copy the backup somewhere off the server, as I don't feel there is much to be gained by just copying it to somewhere else on the same server. I recently made a full backup which after compression amounted to 2.5 GB and took me 6.5 hours to copy to my own computer... so that solution doesn't seem practical for the binlog backup. Should we rent another server somewhere? Is it possible to find a server like that really cheap? Or is there some other solution? What are other people's MySQL backup practices?
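One fairly cheap pattern is to rsync the binlog directory to any off-site box over SSH from cron: only new data crosses the wire each run, and -z compresses it in transit. A sketch where the paths and hostname are placeholders:
Code:
# copy new binlog data off-site; rsync transfers only what changed since the last run
rsync -avz --partial /var/lib/mysql/mysql-bin.* backup@offsite.example.com:/backups/binlogs/
# once copies are verified, old logs can be trimmed in MySQL with PURGE BINARY LOGS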
We're load testing some of our larger servers (16GB+ RAM), and when memory starts to run low they invoke the OOM killer instead of swapping. I've checked swapon -s (which says we're using 0 bytes out of 16GB of swap), I've checked swappiness (60), and I've tried upping the swap to 32GB, all to no avail. If we pull some RAM and configure the box with 8GB of physical RAM and 16 (or more) GB of swap, sure enough it dips into it and is more stable than a 16GB box with 16 or 32GB of swap.
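A few things worth capturing the next time it happens, since the kernel's OOM message usually says which memory zone actually ran out (lowmem exhaustion on 32-bit kernels and overcommit settings are common culprits, but that is speculation from here):
Code:
# overcommit and swap settings at the time of the kill
sysctl vm.swappiness vm.overcommit_memory vm.overcommit_ratio
swapon -s
# the kernel's own explanation of the last OOM kill
dmesg | grep -i -B2 -A10 'out of memory'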
The issue I am currently facing is more of an annoyance / curiosity, and it may not even be a problem, but it sure feels like one. Background: I am becoming a computational chemist (grad school begins in the fall) and the code I run is all in Fortran, which I am currently compiling with gfortran. When I compile the code (on a box running Ubuntu Server), everything appears to compile fine, but the linking stage takes five to ten minutes. I ran the time command during the make and got the following results.
Quote:
time make
gfortran -c -O3 -fomit-frame-pointer -finline-functions -ffast-math suijtab.f
Linking testCompile ...
done
[code]...
I just don't understand why it takes 5 minutes of real time when it only takes 10-15 seconds of system time.
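For what it's worth, real time counts wall-clock waiting (disk or network I/O, other processes holding the CPU), while user and sys only count CPU time actually consumed, so a large gap means the linker is mostly waiting on something rather than computing. A tiny illustration:
Code:
# sleeping burns no CPU, so user and sys stay near zero while real is about 5 seconds
time sleep 5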
Though under Windows my Internet connection works fine, under Debian Linux it connects and disconnects repeatedly at short intervals. It didn't used to be like that, but at a certain point I was forced to install pppoeconf to get my DSL internet connection going. Since then, even though I've uninstalled pppoeconf and now only use nm-applet to manage my internet connection, I still have this annoying problem.
I have a computer with 16GB of RAM. At the moment, top shows all the RAM is taken (NOT by cache), but the RAM used by the various processes adds up to far less than 16GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
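When neither the process list nor the cache accounts for the usage, kernel-side allocations are worth checking before the next reboot, since they never show up under any process; a diagnostic sketch:
Code:
# kernel slab caches, page tables and other non-process memory
grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo
# live per-cache view (run as root)
slabtop -o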
I get this error when I run "sudo apt-get install python-software-properties"
Preconfiguring packages ...
dpkg: unrecoverable fatal error, aborting:
fork failed: Cannot allocate memory
E: Sub-process /usr/bin/dpkg returned an error code (2)
I'm trying to install Deluge via SSH. My VPS has 512 MB of RAM and is only using 11% of it prior to running the command.
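"Cannot allocate memory" on fork usually means the VPS briefly hit its memory (or overcommit) ceiling even if average usage is low. If the virtualization allows it, adding a small swap file is one hedged workaround; it will not work on OpenVZ-style containers where swap cannot be added from inside. A sketch, run as root:
Code:
dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile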
I did a fresh install of Ubuntu 9.10 and installed some software after that. Since then, some process has been eating half of my memory. I have checked the processes running in System Monitor but everything looks normal; the maximum is consumed by compiz at about 26 MB, which seems very normal. I have restarted my computer several times, and for the first 5 minutes everything is fine; after that my CPU fan runs very fast again and one CPU is 95% used (I have a dual core). Please help me out, this invisible thing is driving me crazy. I am attaching my htop screenshot (sorted by CPU %); right now the CPU is not fully used, but the fan is still struggling hard and fast.