Ubuntu :: Memory Taken By .So File
Jun 24, 2010 - Is there any way to find out, for a given process, how much memory has been dynamically allocated by a particular .so file?
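There is no built-in per-library heap accounting, but pmap shows the resident memory of each mapping, including a given shared object's code and data segments. A sketch, with the PID (1234) and library name (libfoo.so) as placeholders:

pmap -x 1234 | grep libfoo.so    # the RSS column is resident memory per mapping

Heap that code inside the .so allocates lands in the ordinary process heap, so attributing it to the library needs a profiler such as valgrind --tool=massif.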
I need to read many files very fast, and reading them from disk gives bad performance. I copied the files into /dev/shm, expecting them to be served from memory, but performance didn't improve. Then I created a tmpfs in /mnt (/mnt/tmpfs), mounted it with

mount -t tmpfs -o size=400m tmpfs /mnt/tmpfs

and copied the files in, but performance still stayed almost the same. I suspect the files were never really copied into memory. Did I do the right thing? I run FC 11 64-bit on a dual-processor server with 16 GB of memory.
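Both /dev/shm and a dedicated tmpfs mount should work; a likely explanation for the unchanged numbers is that the kernel had already cached the files in RAM after the first read from disk, so serving them from tmpfs changed nothing. A quick check, with the file name as a placeholder:

df -h /dev/shm /mnt/tmpfs                  # confirm both are tmpfs and hold the data
time cat /mnt/tmpfs/somefile > /dev/null   # should complete with near-zero I/O wait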
When I delete a running executable or script, it usually (for me, pretty much always, though I don't know if it holds in every case) continues to run without any problems. So I have two questions: where is the running executable/script actually being run from - RAM? And wherever it is stored, is there a way to extract the executable/script from that location? If it makes any difference, I'm using Ubuntu 11.04.
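The binary is not copied wholesale into RAM; the kernel keeps the deleted file's inode and data blocks alive for as long as any process holds them open or mapped, which also means the file can be copied back out through procfs. A sketch, with 1234 standing in for the real PID:

ls -l /proc/1234/exe                  # shows the original path, marked "(deleted)"
cp /proc/1234/exe /tmp/recovered      # recovers the binary itself

For a script, /proc/1234/exe points at the interpreter; look under /proc/1234/fd for an open descriptor on the script file instead.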
I have tried to write files to two USB sticks (2 GB each), and both turned into a read-only file system. Then I tried my Memory Stick Duo, with the same result, and finally my SanDisk HC memory card (from a photo camera) - same again. I can copy files from these portable storage devices onto my HDD fine, but I cannot write to them or delete any files on them.
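When several different devices all go read-only, the usual suspects are a physical write-protect/lock switch (SD cards and some adapters have one) or the kernel remounting the filesystem read-only after an error. A way to check, with the device node and mount point as placeholders:

dmesg | tail -20                                # look for I/O or FAT errors
mount | grep sd                                 # find the device; "ro" means read-only
sudo mount -o remount,rw /dev/sdb1 /media/disk  # try switching back to read-write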
Using the 10.04 Netbook version on an Asus Eee 901, I find that file copies sometimes just seem to freeze - usually when copying from the built-in SSD to a plug-in SDHC memory card. I have tried reformatting the card and using a different card. It is not just this computer, since I saw the same thing on my previous Asus, the 900 model.
I am told there are issues with Nautilus. Is there anything that can be done to improve this, or anything else I can install instead of Nautilus? I assume there is some issue with Ubuntu's handling of SDHC memory cards.
It is becoming annoying because it works sometimes and then not. When it happens, the only option seems to be to power the netbook off and on again. Even if the file copy is cancelled, the card stays inaccessible until a reboot.
Also, past a certain point, new files I copy to the card appear to copy OK but are evidently corrupt in some way - videos, for instance, play back faulty.
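One hedged explanation for this pattern: the copy dialog races far ahead of what a slow card can absorb, and when the kernel finally flushes its write-back cache everything stalls; interrupting the card mid-flush would also explain the later corruption. Shrinking the dirty-page thresholds makes writes go out steadily instead of in one huge burst (the values below are only a starting point):

sudo sysctl vm.dirty_background_ratio=1
sudo sysctl vm.dirty_ratio=5
tail -f /var/log/kern.log    # watch for card I/O errors while a copy stalls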
I am running Ubuntu Lucid x64 as a file server that shares its files via SFTP, NFS and Samba. Currently the hard disks are configured to go to standby when not needed. This works perfectly as long as no one browses the shares and my HTPC is off: the HTPC repeatedly scans the shares for new music and movies. In other words, my problem is that the disks spin up far more often than they should, and the spin-up time also delays the response while browsing. Since the machine has a lot of unused RAM, I want to tell the kernel to keep the directory structure in memory, so the disks would not need to spin up every time someone browses through the directories.
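The kernel's dentry/inode caches already hold the directory structure; the knob controlling how eagerly it discards them is vm.vfs_cache_pressure. A sketch - the value and the share path are assumptions to tune:

sudo sysctl vm.vfs_cache_pressure=10    # default 100; lower = keep dir metadata longer
ls -R /srv/share > /dev/null            # pre-warm the cache after boot

Note this helps only for metadata: reading file contents will still wake the disks, and under real memory pressure the cache can still be evicted.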
I have worked with Linux for a long time but never administered a system until I got my own server, which runs Fedora 14. I have a 3 TB drive but can apparently only use 2 TB of it - at least Disk Analyzer tells me 2 TB is 100% of capacity. Also, per Disk Analyzer I am only using 50 GB of that 2 TB, yet I am out of space on the root file system. If I run df -h, I get the following:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev1-lv_root
50G 40G 7.2G 85% /
[code]....
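Two separate things are going on here. The 2 TB ceiling is the limit of an MBR partition table; the drive needs a GPT label to address all 3 TB. And the full root filesystem is just a 50 GB LVM logical volume, which can be grown into free space in its volume group. A sketch, assuming the volume group has unallocated extents and the filesystem is ext4 (Fedora 14's default):

sudo vgs                                            # check free space in the VG
sudo lvextend -L +100G /dev/mapper/vg_dev1-lv_root  # size is an example
sudo resize2fs /dev/mapper/vg_dev1-lv_root          # grows ext4 while mounted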
I have a computer with 16 GB of RAM. At the moment, top shows all the RAM taken (NOT by cache), but the RAM used by the various processes adds up to far less than 16 GB. I have seen this problem several times, but I don't understand what is happening. My only remedy so far has been to reboot the machine.
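Per-process figures miss several large consumers: kernel slab caches (dentries, inodes), tmpfs/shared memory, page tables and hugepages. Before rebooting, it is worth seeing which of those holds the memory:

free -m
grep -E 'Slab|SReclaimable|Shmem|PageTables|HugePages_Total' /proc/meminfo
sudo slabtop -o | head -15    # one-shot list of the biggest kernel slab caches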
How do I calculate (if possible) the end address of an image file in flash memory? I'm trying to write checksum and check-header functions, and the information I have is the file's offset, how many sectors it consumes, and its size. I need the end address, but I don't know how to calculate it. test.img's start address is 0, its size is 0x20000, and it consumes 3 sectors.
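For a contiguous image the sector count matters only for erase alignment; the last byte of the image lives at start + size - 1. With the numbers above, end = 0x0 + 0x20000 - 1 = 0x1FFFF. (If "end address" is meant exclusively, i.e. the first byte after the image, drop the -1 and use 0x20000.) The same arithmetic in a shell:

printf '0x%X\n' $(( 0x0 + 0x20000 - 1 ))    # prints 0x1FFFF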
I get this error when I run "sudo apt-get install python-software-properties":
Preconfiguring packages ...
dpkg: unrecoverable fatal error, aborting:
fork failed: Cannot allocate memory
E: Sub-process /usr/bin/dpkg returned an error code (2)
I'm trying to install deluge via SSH, and my VPS has 512 MB of RAM, only 11% of which is in use before I run the command.
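"fork failed: Cannot allocate memory" is about committable address space, not free RAM: fork() momentarily needs to commit a copy of the parent, and a 512 MB VPS with no swap (or an OpenVZ memory guarantee) can refuse that even when most RAM is idle. Checking the commit numbers, and adding a swap file where the virtualization allows it (KVM/Xen yes, OpenVZ no), usually settles it:

grep -i commit /proc/meminfo    # CommitLimit vs Committed_AS
sudo dd if=/dev/zero of=/swapfile bs=1M count=512
sudo mkswap /swapfile
sudo swapon /swapfile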
I did a fresh install of Ubuntu 9.10 and installed some software after that. Since then, some process has been eating half of my memory. I have checked the running processes in System Monitor, but everything looks normal - the biggest is compiz at about 26 MB, which seems fine. I restarted my computer several times; for the first five minutes all is fine, then my CPU fan runs very fast again and one CPU core is 95% used (I have a dual core). Please help me out, this invisible thing is driving me crazy. I am attaching my htop screenshot (sorted by CPU%); right now the CPU is not fully used, but the fan is still struggling hard.
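If htop shows nothing obvious, the load may come from short-lived processes or kernel-side work that a sorted snapshot misses. A couple of quick, generic checks:

ps aux --sort=-%cpu | head -15    # snapshot of the top CPU users
ps aux --sort=-rss  | head -15    # and the biggest memory users
uptime                            # is the load average actually high?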
To get the kernel's details on a new Java process, I read the /proc/<java pid>/stat and /proc/<java pid>/statm files. For some Java processes I find no details in /proc/<java pid>/statm - it contains only seven 0s - while /proc/<java pid>/stat does have details. These processes also have a lifetime of only about one minute.
Kernel version: Linux 2.6.18-8.1.8.el5. Is it possible for a Java process to have no memory details in /proc/<java pid>/statm? If so, how can I get memory-related details for such processes?
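An all-zero statm is what a task with no user address space reports: kernel threads, and processes that have already exited and linger as zombies until reaped - which fits the short lifetime observed here. A way to tell, with 1234 as a placeholder PID:

grep -E 'State|VmSize|VmRSS' /proc/1234/status
# A zombie shows "State: Z (zombie)" and has no Vm* lines at all;
# its memory was already released, so there is nothing left to read.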
I have a PHP script used for bulk mailing, which I run every minute through a cron job: the cron entry executes a .sh file containing the path to the PHP script. Every time the script runs it uses a lot of memory, which ends in a server crash. How can I restrict that process's memory usage to a minimum, or give it a low priority so that it only runs when there are no higher-priority jobs and the server stays up?
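The wrapper .sh is a convenient place to apply limits. A sketch - the limits and the script path are placeholders to tune:

#!/bin/sh
# Cap the process at 256 MB of virtual memory and run at the lowest priority.
ulimit -v 262144
# ionice -c3 (idle I/O class) is optional; drop it if the tool is unavailable.
exec nice -n 19 ionice -c3 php -d memory_limit=128M /path/to/mailer.php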
I use malloc and free a lot in my program. top shows the memory as allocated, but when I free it, the usage does not go down (I am watching the VIRT column). If this keeps growing, what will happen to my program - will it run out of memory?
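This is normal glibc behaviour: free() usually returns blocks to malloc's own pools rather than to the kernel, so VIRT rarely shrinks; later mallocs reuse that space, and the program only runs out of memory if the amount of live data genuinely keeps growing. RES gives the more honest trend (1234 is a placeholder PID):

while sleep 5; do ps -o vsz=,rss= -p 1234; done    # flat RSS over time = no leak

If RSS climbs without bound, valgrind's memcheck tool will point at the leak.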
View 4 Replies View RelatedFedora 14 xfce
HP Mini 210
I am looking to buy some memory for my netbook. Currently I have 1 GB of DDR3 memory. The specification says that 2 GB is the maximum, yet when I run the following it says 4 GB is the max:
[Code].....
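The two numbers typically have different sources: the vendor spec lists what was validated and sold with the machine, while the firmware/chipset report what is electrically addressable. dmidecode asks the firmware directly:

sudo dmidecode -t memory | grep -i 'Maximum Capacity'

A single 2 GB SO-DIMM is the safe purchase when the vendor says 2 GB; reports of larger modules working in a specific model are worth verifying before buying beyond that.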
Let me know how to clear cache memory (RHEL 5.1), as it consumes almost 100% of physical memory.
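Cache filling RAM is by design - the kernel returns those pages automatically the moment applications need them - so dropping caches is rarely necessary outside benchmarking. When you do want it, RHEL 5's 2.6.18 kernel supports drop_caches (run as root):

sync                                 # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches    # drop page cache + dentries + inodes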
I am looking for a free database with low memory usage, InnoDB-like and MEMORY-like engines, a C API, trigger support, and client/server operation, for use in embedded Linux systems.
I am new to C and Linux. My code below performs arbitrary writes, but I can't figure out where or how.
I am calling the insertNode() function with seq = 'MISSISSPPI$' and alphabets = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ$'
Code:
Weird behaviour I should mention: when I check for a NULL pointer in node->child[index], the unassigned entries are no longer NULL - they point to arbitrary memory.
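A likely cause, given that "unassigned" child pointers hold garbage: malloc() does not initialize memory, so a freshly allocated node's child array contains whatever happened to be there - allocating with calloc(), or memset-ing after malloc, fixes that. Since the code itself is not shown above, a generic way to locate both the uninitialized reads and any out-of-bounds writes is to run under Valgrind (trie.c is a placeholder file name):

gcc -g -O0 trie.c -o trie
valgrind --tool=memcheck ./trie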
We found that using 'top' to show memory usage on a server (SuSE Linux 10), we get both virtual memory usage and resident memory usage. For one particular process the virtual memory is around 1.1 GB, which is large, while the resident memory is only 300 MB. Does anyone know what the difference is? I would also like to know whether the difference (1.1 GB - 300 MB = 800 MB) is actually available for use by other applications on the system.
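VIRT counts everything the process has mapped - shared libraries, memory-mapped files, and heap reserved but never touched - while RES counts only the pages currently in physical RAM. Most of that 800 MB therefore never occupied RAM at all (and any swapped-out part occupies swap, not RAM), so it is not memory other applications are waiting on. The per-mapping breakdown is visible with:

pmap -x 1234 | tail -3            # 1234 is a placeholder PID; per-column totals
ps -o pid,vsz,rss,comm -p 1234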
View 1 Replies View RelatedHow do I write a script for my Linux that can show me total memory vs used memory and have it email me results if it's over 70 percent?
I am monitoring physical memory on a server I administer, and my hardware provider told me they had increased physical memory to 4 GB. However, several tools (free -m; top; dmesg | grep Memory; grep MemTotal /proc/meminfo) show that I actually have 3 GB, not 4. My doubt comes from the fact that dmesg | grep Memory reports "3103396k/4194304k available". The first number is effectively 3 GB, but the second one is 4! Why am I seeing two different numbers?
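The second number is the highest physical address the BIOS mapped; the first is what remains usable after carve-outs. Just below the 4 GB line sits the PCI/MMIO hole (video aperture, device BARs), and on a 32-bit kernel without PAE - or hardware/BIOS without memory remapping above 4 GB - the RAM shadowed by that hole is simply lost. Quick checks:

uname -m                 # i686 means a 32-bit kernel; a PAE kernel may recover the RAM
dmesg | grep -i e820     # the BIOS memory map shows the reserved regions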
I am using Ubuntu and looking for a good editor to edit a file that is > 4 GB. I just need to put content at the beginning and end of the file. I suppose I could use something like
cat "text to add" >> huge_file
to append to the file. Is that the route to go? What about prepending? And in general, what is the best approach if I wanted to edit somewhere in the middle?
I've tried Vim and it fails miserably. I assume Emacs and nano would be even worse. What else is there? I assume that to accomplish this, the editor would have to be specifically designed not to keep the entire file contents in memory.
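Appending with >> is indeed the right call: it touches only the end of the file. Anything else on a 4 GB file is best done as a stream into a new file, which keeps memory use constant regardless of file size (file names here are placeholders):

cat extra.txt >> huge_file                                      # append in place
cat header.txt huge_file > huge.new && mv huge.new huge_file    # prepend = rewrite
sed 's/old/new/' huge_file > huge.new                           # mid-file edits, streamed

There is no way around the rewrite for prepending or mid-file edits on ordinary filesystems, since files cannot grow at the front.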
I am writing an application that wants to access peripheral registers outside the standard (allowed) memory area. Doing so gets me a segmentation fault - I know this is natural behaviour. One way around it is to write a kernel module that Linux loads; I will consider that some time later. For now I want a quick result: is there a direct way to let a gcc-compiled user-space program write to those peripheral memory areas?
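From user space the direct route is mapping physical addresses through /dev/mem with mmap(): it requires root, and a kernel built with STRICT_DEVMEM may restrict it to non-RAM (MMIO) ranges, which is exactly the peripheral case. If BusyBox's devmem applet (or the standalone devmem2 tool) is available, it wraps that mmap so registers can be poked from a shell; the address, width and value below are placeholders:

devmem 0x4A100000 32                # read a 32-bit register at that address
devmem 0x4A100000 32 0x00000001     # write a value to it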
Is it possible that SHM shared memory is counted as cache memory on Linux with kernel 2.6.18? I find it really odd, since this memory is not file-backed, but I have a piece of code that loads data using shm_open + mmap, and it generates an amount of cache memory in /proc/meminfo that corresponds exactly to the amount of shared memory. (I load that data from a file, but I use posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED) to ensure the file itself is not cached, and I made sure that this works as expected.) As far as I know, SHM memory was not tagged as cache memory under kernel 2.6.9. If this is the case, it is really unfortunate: cache memory is normally considered part of the "available" memory, since it can be flushed promptly, and that is clearly not true for SHM memory. Is there an easy way to get the total amount of used SHM memory on a system?
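Yes - POSIX shared memory lives on the tmpfs mounted at /dev/shm, and tmpfs/shmem pages are accounted under Cached on kernels of that era (later kernels break them out as a separate Shmem field in /proc/meminfo). As observed, they are not reclaimable like file cache; at best they can be pushed to swap. The totals are easy to read off:

df -h /dev/shm      # space used by POSIX shm objects (shm_open)
ls -l /dev/shm      # the individual objects
ipcs -m             # System V shared memory segments, accounted separately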
This is my first post in these forums. I'm still quite new to Linux (using Mint 9), so please bear with my not-very-articulate question(s). When I boot up and open a tty terminal, I get a message saying "Memory corruption detected in low memory." I've done an extensive Google search about the issue and it seems not uncommon. I ran a memtest with no errors returned, so I'm sure there's nothing really wrong with the memory; apparently it's a bug in the kernel that's causing this.
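The message comes from the kernel's low-memory corruption checker, which periodically rescans the first 64 KB of RAM because some BIOSes scribble there. If memtest is clean and the machine behaves normally, one hedged option is simply to silence the check with a kernel boot parameter (Mint 9 uses GRUB 2):

# In /etc/default/grub, extend the kernel command line, for example:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash memory_corruption_check=0"
sudo update-grub    # regenerate the config, then reboot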
I found from the command 'top' that 8 GB of memory is used. However, using the command 'ps' with options to list the running processes and then summing up the memory used by them gives less than 2 GB. Where has the used memory gone?
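top's "used" figure includes page cache and kernel memory, neither of which belongs to any process ps can list. The classic reading is free's second line; if even that is far above the per-process total, the kernel-side counters tell the rest:

free -m                               # the "-/+ buffers/cache" row excludes cache
grep -E 'Slab|Shmem' /proc/meminfo    # kernel caches and shared memory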
I have been setting up a VPS I got with bhost.net, running CentOS. I've been learning and have set up everything I need, with the exception of FTP/SFTP.
Using yum I installed vsftpd and ran into problems. Thinking it was something I might have done, I did a fresh install of CentOS - and I still receive the same problem on a fresh install, so it is nothing I have done to the server.
The problem is that when connecting via an SFTP client I get an out-of-memory error. This error is listed in the PuTTY FAQ ( url ) under A.7.5, with a brief explanation of the cure under A.7.6.
There is mention of a login script, but I don't know where it is located. I'm a novice at Linux but by no means incompetent when it comes to computing.
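Per that FAQ entry, the "out of memory" is the client misreading unexpected text: anything a login script prints during a non-interactive session corrupts the SFTP protocol stream. On CentOS the "login script" means the shell startup files of the account you connect as - ~/.bashrc, ~/.bash_profile, /etc/profile and friends - and the standard guard is to bail out early for non-interactive shells:

# Near the top of ~/.bashrc for the SFTP account:
[ -z "$PS1" ] && return    # non-interactive: print nothing, run nothing below

Incidentally, SFTP is served by sshd, not by vsftpd; the two are unrelated, so installing vsftpd neither causes nor fixes this.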
RAM for older machines like mine is fairly cheap these days, but flash memory is just as cheap or cheaper. So I'd like to ask about the feasibility of expanding my system's memory using flash memory, and about whether a swap partition on the flash device or a swap file is the better way to go.
By flash memory I have in mind mainly USB sticks or what are sometimes called "pen drives." But I do also have CF and SD cards that, with the proper cheap adapter (one of which I already own for adapting CF) could be used to create extra swap space. So, what is the current consensus on the feasibility/advisability of using flash memory for swap? I've read about the limited write cycles of flash being an argument against using it for swap. But recent reading indicates to me that the limited write cycles problem applies mostly to older, smaller-capacity flash memory. Some will come out and say that, for larger-capacity flash memory, the life of the device is likely to exceed the amount of time your current computer will be useful (I think I've seen estimates in the range of 3-4 years life--minimum--for newer, higher-capacity flash memory).
A more persuasive argument I've heard against using flash memory for swap is that access times for these devices can be much slower than SATA, and maybe even IDE, hard drives. That would certainly dictate against using flash memory for swap.
So, how about some input on this issue? Anyone using flash memory for swap? If so, what kind (e.g., usb stick or SD/CF)? Are you using a swap file or a swap partition? How's system performance? Likewise, has anyone had flash-memory-used-as-swap die on them? The consequences would undoubtedly be dire. Also, has anyone measured flash memory access times to confirm or refute claims about slow access times? Are some types of flash memory better/worse than others in terms of access times?
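Trying it is cheap and fully reversible, which also answers the performance question empirically. A sketch - /dev/sdb1 is a placeholder for the stick's partition, and everything here can be undone with swapoff:

sudo mkswap /dev/sdb1
sudo swapon -p 10 /dev/sdb1    # higher priority than the disk swap, so it is used first
swapon -s                      # verify; undo with: sudo swapoff /dev/sdb1
sudo hdparm -t /dev/sdb        # rough sequential-throughput comparison vs. the HDD

Note that hdparm -t measures sequential reads; what swap actually stresses is small random I/O, where USB sticks are often poor even when their sequential numbers look fine.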
I have a query regarding top and virtual memory. When we run top, it shows VIRT (virtual memory), RES (resident memory) and SHR (shared memory). The total virtual memory of my machine is 4 GB (2 GB RAM + 2 GB swap), yet I can see a process showing 4000m in the VIRT column. What does this mean, given that top shows more virtual memory than is actually available?
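VIRT measures address space, not storage: mappings need no RAM or swap behind them until their pages are touched, and with overcommit the kernel grants reservations beyond RAM + swap. A 4000m VIRT on a 4 GB machine is therefore legal and common (JVMs and mmap-heavy programs do it routinely). The relevant counters:

cat /proc/sys/vm/overcommit_memory    # 0 = heuristic overcommit (the default)
grep -i commit /proc/meminfo          # CommitLimit vs Committed_AS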
View 1 Replies View RelatedWhen I start bluej and try to open files from my memory stick the memory stick is not available. Is there any way that I can open files directly in bluej from my memory stick.
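BlueJ's file dialog can only see the stick once the desktop has mounted it; on most desktop Linux systems removable media appear under /media. A quick check from a terminal, with the volume label as a placeholder:

mount | grep /media    # find the mount point, e.g. /media/USBSTICK

Then browse to (or type) that path in BlueJ's open dialog.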