I have a kernel module in which I do a phys_to_virt() and then perform a memcpy to the returned virtual address. The problem I'm having is that the module crashes during the memcpy when the physical address is in high memory. Do I need to perform some kind of mapping operation on high-memory addresses for this to work?
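For what it's worth, on 32-bit kernels phys_to_virt() is only valid for lowmem; highmem pages have no permanent kernel mapping and need to be mapped temporarily first. Below is a minimal sketch of that approach, assuming the copied range stays within a single page (the helper name is made up for the example):
Code:
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static void copy_to_phys(phys_addr_t phys, const void *src, size_t len)
{
        struct page *page = pfn_to_page(phys >> PAGE_SHIFT);
        unsigned long offset = phys & ~PAGE_MASK;
        void *vaddr;

        vaddr = kmap(page);                 /* temporary mapping, works for highmem pages */
        memcpy(vaddr + offset, src, len);   /* assumes the copy does not cross a page boundary */
        kunmap(page);
}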
My problem seems very simple: it's high memory usage. I occasionally use the movie player to watch a few shows, and I use Firefox as well. My memory usage starts out quite small, about 500 MB, but after using Firefox lightly and the movie player it jumps to almost 2 GB, and this is after they've been closed. What gives? I've attached an image so you can see what I'm talking about.
Most of my many Linux installs boot up to a memory use of 170-190 MB with no programs open or running. But Ubuntu 9.10 shows 305-310 MB, and the top RAM user is compiz.real, which I think is the desktop effects application. I have the NVIDIA video card driver installed, along with the NVIDIA X Server configuration tool. Also, I cannot find where the Compiz options are (the display options in System, Preferences do not open, and say to use my video card's tool instead).
I installed Debian sid about a month ago (first Xfce, then GNOME) but noticed that it's really slow. Upgrades take ages, launching (and using) Firefox takes a long time, and so on. Compared to my Ubuntu and Arch Linux installs (on the same computer), or to my previous Debian installation, there is clearly a problem somewhere. Today I ran "top" sorted by memory usage: 3.5% xulrunner-stub, 2.1% dropbox, 1.4% aptitude (doing an upgrade), 1.4% clementine, ... nothing terrible, but I still have 2.7 GB of RAM used (more than 50%):
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3967       2685       1282          0         79       1938
I have a server running Samba, with about 70 Samba users connected at a time. The system has 4 GB of memory, and it seems each Samba process is using only 3352 KB of memory when I run the command pmap -d (pid of samba).
But when I run the top command, it reports:
Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9% us, 4.9% sy, 0.0% ni, 93.3% id, 0.8% wa, 0.2% hi, 0.0% si
Mem: 3895444k total, 3163192k used, 732252k free, 352344k buffers
Swap: 2097144k total, 208k used, 2096936k free, 2487636k cached
Why would the system be using so much memory? By the way, the server is not running any other processes. The Samba version is 3.0.33-0.17.
I upgraded from Fedora 13 to 14 over the network, and everything seems to have worked. The one problem since the install is that setroubleshootd consumes a lot of memory.
[Code]
It doesn't take long for setroubleshootd's memory usage to jump. I can kill the process, but it starts up again. I have tried disabling the service, but it doesn't show up in /etc/init.d:
# service setroubledshootd stop
setroubledshootd: unrecognized service
So I am not sure what I can do about setroubleshootd besides killing it off every 15 minutes.
On our database server, when checking memory usage with the top command, we always see 32 GB of RAM utilized. We have set sga_max_size to 8 GB and the PGA to 3 GB. We tried shutting down the Oracle DB, and memory then went down to 24 GB according to top. After a cold reboot of the DB server, it went down to 1.5 GB.
But once users start working again at the end of the day, memory goes back up to 32 GB.
I am having a problem with my server that I would call a bit "important". For the last 3 weeks, the used space on my hard disk (RAID 1) has been growing. I have 2 x 1 TB HDDs in RAID 1 and I did not install anything during those weeks. The used space just went from 90 GB up to 580 GB. The situation is stable now, but I don't think it's normal.
The bandwidth usage is low (around 120 GB in 2 months) and I am running 6 Counter-Strike game servers, a forum, a very small website and some local stuff... A friend of mine told me that my server could have been hacked, and I am afraid it has been. Some useful information: when I reboot the server, the used space goes back down to ~100 GB and then starts growing again. I can't really find where all those files are located:
I have an Ubuntu server running in our small office. Among its many duties is report generation. It uses PHP and DOMPDF (a PHP library for converting HTML/CSS to PDFs for printing). PHP's default memory limit of 32MB is not even close to being enough to pull large amounts of data from the database and generate images/tables/PDFs with that data.
I increased the memory limit to 64 MB, and that is adequate for reports up to about 3 pages (it varies based on table complexity, images, etc.). If a user tries to generate a report longer than that, PHP just throws an "out of memory" error and doesn't produce the report.
My question is: what are the possible consequences of increasing the memory limit yet again to 128MB or maybe even higher? The server isn't terribly powerful. It has 2GB RAM and 4GB swap space. I know that isn't much but this is a small office and at most I can only see two or three people trying to run reports at the same time. As for security, apache is currently only serving pages in the local network, but sometime within the next year I'll probably have it hosting a public website (currently using a hosting service). Is a high memory limit a potential security risk when exposed to the internet?
EDIT: Sorry, PHP's default memory limit is 16 MB, not 32 as I said. The question still stands, however.
Basically I have a machine with 16 GB of RAM and have just discovered that a single process using all of it can crash the whole system. How could I run a process in such a way that, if more than 90% of system memory is used, the process immediately crashes instead of taking the whole system down?
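One possible approach, sketched below, is to cap the process's address space with setrlimit() before exec'ing it; allocations beyond the limit then fail and the process dies rather than exhausting the machine. The 14 GB figure is an assumed ~90% of 16 GB, and the wrapper itself is hypothetical:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        rlim_t cap = 14ULL * 1024 * 1024 * 1024;   /* assumed ~90% of 16 GB */
        struct rlimit lim = { cap, cap };

        if (argc < 2) {
                fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
                return 1;
        }
        if (setrlimit(RLIMIT_AS, &lim) != 0) {     /* cap the address space */
                perror("setrlimit");
                return 1;
        }
        execvp(argv[1], &argv[1]);                 /* run the real workload under the limit */
        perror("execvp");
        return 1;
}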
I have a PHP script that is used for bulk mailing, and I run it every minute through a cron job: the path of the PHP script is in a .sh file, and the cron job executes the .sh file. Every time the script runs it uses a lot of memory, which ends up crashing the server. How can I restrict the memory usage of that process to a minimum, or give it a low priority so that it only runs when there are no higher-priority jobs, so that the server keeps running normally while the script runs?
I recently upgraded to Ubuntu 10.10 and I am seeing ultra-high memory usage from gnome-settings-daemon, around 2 GB after suspend! Killing and restarting the daemon solves the issue. Is anybody else seeing this behaviour?
I was browsing a folder with lots of images; after finishing I closed Nautilus and noticed that my computer had become slow, so I checked with System Monitor and found that Nautilus was using almost 100 MB of RAM (with 4 tabs open). I'm not sure whether this is normal, because I tried reopening the same folder with PCManFM and it used less than 20 MB of RAM (also with 4 tabs open). Here's the screenshot from System Monitor.
I am new to C and Linux. My code below does arbitrary writes, but I can't figure out where or how it does them.
I am calling the insertNode() function with seq = 'MISSISSPPI$' and alphabets = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ$'
Code:
A weird behaviour I should mention is that when I check for a NULL pointer in node->child[index], the unassigned values are not NULL any more; they point to arbitrary memory.
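Without seeing the code it's only a guess, but that symptom is typical of allocating nodes with malloc(), which leaves the child array uninitialized, so the leftover bytes look like arbitrary pointers. A sketch using calloc() so every child starts out NULL (the struct layout here is assumed):
Code:
#include <stdlib.h>

#define ALPHABET_SIZE 27   /* assumed: 'A'-'Z' plus '$' */

struct node {
        struct node *child[ALPHABET_SIZE];
};

/* calloc zeroes the whole block, so every child pointer is NULL until assigned;
   a node from plain malloc contains whatever bytes happened to be there. */
static struct node *new_node(void)
{
        return calloc(1, sizeof(struct node));
}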
Is it possible that SHM shared memory is counted as cache memory on Linux with kernel 2.6.18? I find it really odd, since this memory is not file-backed, but I have a piece of code that loads data using shm_open+mmap, and it generates an amount of cache memory in /proc/meminfo that corresponds exactly to the amount of shared memory (I load the data from a file, but I use posix_fadvise(fd,0,0,POSIX_FADV_DONTNEED) to make sure the file itself is not cached, and I verified that this works as expected). As far as I know, SHM memory was not counted as cache with kernel 2.6.9. If it is counted that way, it is really unfortunate: cache memory can normally be treated as part of the "available" memory since it can be flushed promptly, but that is clearly not the case with SHM memory... Is there an easy way to get the total amount of SHM memory in use on a system?
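For anyone who wants to reproduce the accounting behaviour, here is a minimal sketch of the shm_open+mmap pattern described above; the segment name and size are made up for the example (older glibc needs -lrt):
Code:
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        const size_t size = 64UL * 1024 * 1024;   /* hypothetical 64 MB segment */
        int fd = shm_open("/shm_demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, size) != 0) { perror("ftruncate"); return 1; }

        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        memset(p, 0, size);   /* touching the pages is what shows up in /proc/meminfo */

        munmap(p, size);
        close(fd);
        shm_unlink("/shm_demo");
        return 0;
}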
I am developing a GPS driver that is connected over a high-speed UART. The driver for the high-speed UART is available and it creates a device node, /dev/ttyHS0. Using the device node I could, I guess, read from and write to the GPS chip from user space, but my aim is to write a GPS driver in kernel space that communicates with the high-speed UART driver. After some initial study, I think I can write either a line discipline driver or a serio driver to communicate with the GPS chip firmware through the high-speed UART driver.
Most kernels are written in low-level programming languages such as C and assembly. Would it be possible to write a kernel in a high-level language such as Python? Many high-level languages are themselves implemented in C.
I'm currently developing a program in C++, using Qt, for an embedded board (SBC9261). It works well but crashes after some time due to a system memory overload (my program uses more and more memory until it hits 100%, at which point it crashes). I've been able to figure out the source of the memory leak: the function f is called by my program every second. f instantiates a new object (a QImage from the Qt library), does a bunch of processing on it, and returns it to the calling function:
Code:
QImage *MyClass::f(QString filename) {
    // Open image (allocated with new; f never deletes it)
    QImage *image = new QImage(filename);
    // ... processing ...
    return image;
}
I have a general question regarding memory errors. I frequently run into memory errors such as segfaults, double frees, etc. Sometimes I get the following traces, for example.
I have 2 applications that send and receive messages through shared-memory IPC. When I run the apps it works, but the number of messages per second keeps changing drastically: sometimes it is 400-500 per second, then 800, then 1200, then 2000. Is this normal with SHM IPC, or could it be a code-related issue?
I have run into a problem with my code (a small TCP server). After a thread returns, memory does not decrease, but when a new connection is made memory does not increase either, and the new connection starts a new thread with the same thread ID as the previous one. When two connections are made at the same time, two thread IDs are created, and when those threads return the memory does not decrease. Valgrind indicates that there is no memory leak and that the pointers are released. Indeed, more memory is only allocated when a new thread ID is created. I am using gcc on Debian.
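One thing worth checking (an assumption, since the server code isn't shown): if the worker threads are created joinable and never joined, their stacks and bookkeeping are kept around until pthread_join(), which looks exactly like memory that is never returned. A sketch of detaching the workers instead (compile with -pthread; the function names are made up):
Code:
#include <pthread.h>

static void *handle_connection(void *arg)
{
        /* ... serve the client on the socket passed in arg ... */
        return NULL;
}

int spawn_worker(int client_fd)
{
        pthread_t tid;

        if (pthread_create(&tid, NULL, handle_connection,
                           (void *)(long)client_fd) != 0)
                return -1;
        pthread_detach(tid);   /* resources can be reclaimed as soon as the thread exits */
        return 0;
}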
I have been assigned a school project on detecting memory leaks in Linux processes. I am reading, but I have found it hard and inefficient to go through the vast documentation without knowing what to look for. Could you please give me some guidelines on this subject?
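As a starting point, a deliberately leaky toy program can help you see what the tooling reports; for instance, running something like the sketch below under valgrind's memcheck (valgrind --leak-check=full ./a.out) flags the lost block:
Code:
#include <stdlib.h>

int main(void)
{
        char *p = malloc(1024);   /* allocated but never freed: a definite leak */
        (void)p;                  /* silence the unused-variable warning */
        return 0;
}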
In words, AND the byte at memory location 45 with the immediate value 03. as reports "Ambiguous operand size for `and'". How could I code the instruction so that as understands my intention?
john: .byte 45
and byte[john],03
gives the same error.
I need to write a small program that eats away at available memory. I need to create a memory leak to test how other programs cope. I need to run this program on Linux and see the available memory decreasing.
So I have done:
Code:
#include <stdlib.h>
#include <string.h>

int main() {
    int *buffer;
    while (1) {
        buffer = malloc(1024 * 1024);          /* grab 1 MB per iteration and never free it */
        if (buffer)
            memset(buffer, 1, 1024 * 1024);    /* touch the pages so the memory is actually committed */
    }
}