Programming :: Find Out - How To Detach And Delete Shared Memory Segment When Hit Ctrl-C
Mar 4, 2011
I have a program that creates and uses a shared memory segment. I am trying to find out how to detach and delete this shared memory segment when I hit Ctrl-C, and I still need the process to terminate. shmdt() and shmctl() are passed variables that are local to main() (shared and shmid).
Code:
//Prototype
void leave(int sig);
//Part of the code trying to install the signal handler
if (signal(SIGINT, leave) == SIG_ERR)
    perror("signal");
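A minimal sketch of one way to do this, assuming a System V segment obtained with shmget()/shmat(): make shmid and the attached pointer file-scope so the handler can reach them (g_shmid and g_shared below are placeholder names, not the poster's). Strictly speaking shmdt()/shmctl() are not guaranteed to be async-signal-safe, so setting a flag in the handler and cleaning up in main() is even safer.
Code:
#include <signal.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

/* file-scope copies so the signal handler can reach them */
static int   g_shmid  = -1;
static void *g_shared = (void *)-1;

void leave(int sig)
{
    (void)sig;
    if (g_shared != (void *)-1)
        shmdt(g_shared);                 /* detach the segment           */
    if (g_shmid != -1)
        shmctl(g_shmid, IPC_RMID, NULL); /* mark the segment for removal */
    _exit(0);                            /* still terminate the process  */
}

int main(void)
{
    g_shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (g_shmid == -1) { perror("shmget"); return 1; }
    g_shared = shmat(g_shmid, NULL, 0);
    if (g_shared == (void *)-1) { perror("shmat"); return 1; }

    if (signal(SIGINT, leave) == SIG_ERR)
        perror("signal");

    for (;;)
        pause();                         /* wait for Ctrl-C */
}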
Our application uses a dynamically loaded shared object library (codec library) to compress and decompress audio streams.
There happen to be several static and global variables in this shared object library. Hence it is not possible to process two interleaved, unrelated media streams using this shared object codec library, because each stream corrupts/changes the contents of these static/global variables.
Is there a way through which a context save (save the contents of the data segment of the shared object) and a context load (load previously saved contents of the data segment of the shared object) operation can be performed on the shared object library? This way the context for each media stream can be saved and loaded before and after processing the "other" media stream, respectively.
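If saving and restoring the data segment directly turns out to be impractical, one hedged alternative on glibc is dlmopen(), which loads an extra, fully independent copy of the library (its own statics and globals included) into a new link-map namespace, so each media stream can be given its own copy. A minimal sketch; libcodec.so and codec_decode are invented names:
Code:
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* each dlmopen(LM_ID_NEWLM, ...) loads an independent copy of the
       library, with its own data segment */
    void *h1 = dlmopen(LM_ID_NEWLM, "libcodec.so", RTLD_NOW | RTLD_LOCAL);
    void *h2 = dlmopen(LM_ID_NEWLM, "libcodec.so", RTLD_NOW | RTLD_LOCAL);
    if (!h1 || !h2) { fprintf(stderr, "dlmopen: %s\n", dlerror()); return 1; }

    /* resolve the same entry point in each copy; hand one to each stream */
    int (*decode1)(const void *, int) =
        (int (*)(const void *, int))dlsym(h1, "codec_decode");
    int (*decode2)(const void *, int) =
        (int (*)(const void *, int))dlsym(h2, "codec_decode");
    (void)decode1; (void)decode2;

    dlclose(h1);
    dlclose(h2);
    return 0;   /* link with -ldl */
}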
I want to create a "Shared Memory" segment in Linux, then create multiple "Shared Objects" that can access a table in it, so that, for example, one of them can write something into the table and another can read it, with these operations handled by the programmer. I'm using Ubuntu 9.04 and I've set its runlevel to 3 (I have a command-line environment now). I've searched the Internet a lot but couldn't find a good sample of code for this. I have no experience with it and need your help: please point me to a sample and advise me how to compile and use it with GCC.
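A minimal, hedged sketch of the usual System V pattern: one segment holding a small table, a child process writing a row, and the parent reading it back. The file and structure names are made up; it should build with a plain gcc -o shmtable shmtable.c:
Code:
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

struct table {
    int  count;
    char rows[4][64];
};

int main(void)
{
    /* create a segment large enough for the table */
    int shmid = shmget(IPC_PRIVATE, sizeof(struct table), IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    struct table *t = shmat(shmid, NULL, 0);
    if (t == (void *)-1) { perror("shmat"); return 1; }

    if (fork() == 0) {               /* child: the "writer" */
        strcpy(t->rows[0], "hello from the writer");
        t->count = 1;
        shmdt(t);
        _exit(0);
    }

    wait(NULL);                      /* parent: the "reader" */
    printf("rows=%d first=\"%s\"\n", t->count, t->rows[0]);

    shmdt(t);
    shmctl(shmid, IPC_RMID, NULL);   /* clean the segment up */
    return 0;
}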
I have 2 applications that send and receive messages through shared memory IPC. When I run the apps it works, but the number of messages per second keeps changing drastically: sometimes it is 400-500 per second, then 800, then 1200, then 2000. Is this normal with SHM IPC, or could it be a code-related issue?
Is there any good tutorial on sharing dynamically allocated objects across shared libraries in the same process, and between shared libraries and main()? In particular, I need to know which creation and destruction sequences are valid when libraries are being loaded and unloaded. For example, is it valid to allocate an object from inside a shared library procedure and then delete that pointer from a different module, especially in the case where the allocating module has already been unloaded?
I imagine there might be all kinds of problems with this. Although my preliminary tests seem to work most of the time, I get crashes from time to time, but I'm not sure if they're caused by memory management or by threading issues. I've been restructuring my code to use a global context object to manage object creation and destruction from main(), but I'd like to find a clear exposition of the specific issues I'm dealing with before I go too much further.
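There may not be one definitive tutorial, but the usual rule of thumb is that whichever module allocates an object should also free it, and every object should be destroyed before the module that created it is unloaded; deleting a pointer after the allocating module has been dlclose()d is not safe. A minimal sketch of the main() side under those assumptions (libwidget.so, widget_create and widget_destroy are invented names for a library that allocates and frees its own objects):
Code:
/* build: gcc main.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

struct widget;                       /* opaque type owned by the library */

int main(void)
{
    void *h = dlopen("./libwidget.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    struct widget *(*create)(int) =
        (struct widget *(*)(int))dlsym(h, "widget_create");
    void (*destroy)(struct widget *) =
        (void (*)(struct widget *))dlsym(h, "widget_destroy");
    if (!create || !destroy) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    struct widget *w = create(42);
    /* ... use w ... */

    destroy(w);   /* release the object while its code is still mapped */
    dlclose(h);   /* only unload after every object has been destroyed */
    return 0;
}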
I'm writing a producer-consumer program, where the producer and the consumer are different processes that communicate using queued signals, and when I run it, it always fails with a 'segmentation fault'.
Here is my code:
(Note: I tried using both 'shm_open()' with 'mmap()', and 'shmget()' with 'shmat()'.)
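Since the code itself did not come through, here is a minimal shm_open()/mmap() producer sketch for comparison (not the original code; the object name /pc_demo is made up). One very common cause of a crash with a freshly created object is forgetting to ftruncate() it to the intended size before touching the mapping:
Code:
/* build: gcc producer.c -lrt */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/pc_demo", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* without this the object has size 0 and touching the mapping faults */
    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "item produced");    /* the consumer would read this */
    munmap(buf, 4096);
    close(fd);
    return 0;
}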
So I'll try to be brief and to the point here: I've got a couple of C / C++ apps that communicate with one another via shared memory. These worked completely fine until... well, about twenty minutes ago, when I finished making some network card changes, and suddenly I've got a weird problem going on. At one point, the Parent app waits for the Child app to set a boolean indicating it has finished initialization. This worked fine the last time I ran this app (a few days ago). But right now, the shared flag never seems to get triggered (I've added a printf("Waiting..."); in the Parent app until the flag is set). All the code leading up to it being set in the Child app seems to be running smoothly, so I tried spitting out the addresses of the shared memory locations. The addresses mapped by the Parent app and the Child app are different; this seemed odd, so I went back and wrote a simple miniature app that just opened a shared structure on my own box, and I get the same thing - different addresses - but the miniature apps work just fine.
Is it normal for a shared memory space to be mapped to two different addresses across two processes?
If so, does anyone have any idea what might be the issue at hand with my Parent / Child app scenario? The Child creates the shared memory, the Parent has a wait before it opens it, and if it doesn't exist the open should fail (opening with PROT_READ | PROT_WRITE)... it doesn't fail, so it's evidently there.
All of this worked until literally just a few hours ago, when I made some changes to my network cards, and I can't even imagine how that could have changed whether or not shared memory mapping works...
I have what should be a relatively simple program (fadec.c) that maps a struct from an included header file (fadec.h) to a shared memory region, but I'm struggling to access members in the struct from the pointer returned by shmat. Ultimately, I want to access members in the shared memory structure with a globally declared version of the struct, shm__. Not only do I not know how to accomplish that, but I can't even seem to access members of the shared struct directly from the pointer itself (the compiler complains about dereferencing a pointer to an incomplete type). I'm at a loss and could use another set of eyes if you guys don't mind taking a gander:
Compile Errors:
Code:
tony-pc:/cygdrive/p/test> cc -o fadec fadec.c
fadec.c: In function 'main':
fadec.c:30: error: dereferencing pointer to incomplete type
fadec.c:31: error: dereferencing pointer to incomplete type
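That error generally means the complete struct definition is not visible where the pointer is dereferenced (for example, the header only forward-declares the struct, or the wrong header is being included). A self-contained, hypothetical sketch of a working layout, not the actual fadec.c/fadec.h; in the real program the struct definition would live in fadec.h and be #included:
Code:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct shm_table {            /* full definition must be visible here */
    int  ready;
    char name[32];
};

struct shm_table *shm__;      /* global pointer into the shared segment */

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, sizeof(struct shm_table),
                       IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    shm__ = shmat(shmid, NULL, 0);
    if (shm__ == (void *)-1) { perror("shmat"); return 1; }

    shm__->ready = 1;         /* member access now compiles and works */
    printf("ready=%d\n", shm__->ready);

    shmdt(shm__);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}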
I've got this line find . -type d -name 'elements' -exec rm -rf {} ; put together to find and delete all directories named elements and their contents.
It does work but whenever I run it I get the error/warning
Code:
find: `./dir3/elements': No such file or directory
find: `./dir6/elements': No such file or directory
I allocated a chunk of memory using kmalloc in a Device Driver. Kmalloc provides a pointer to the allocated memory. This is one of my first few drivers.
I assume that the address returned is a Virtual address. I need to find the physical address of the memory location. I am working on an Intel 64 bit Fedora machine. I used the virt_to_phys() routine present in <asm/io_64.h>. I found that this routine returns an unsigned long value (32 bit) instead of an unsigned long long value (64 bit). Moreover, it seems that it simply returns the address - OFFSET instead of extracting the value in the page tables.
So is there any function / system call in Linux which will allow me to see the actual physical address on the Intel 64 arch?
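For reference, a minimal module sketch of how virt_to_phys() is typically used on a kmalloc() buffer; for linearly mapped (lowmem) kernel memory, "virtual address minus a fixed offset" is exactly what the physical address is, so that behaviour is expected rather than a bug. Module and symbol names are made up; it would be built against the running kernel's headers with a standard Kbuild Makefile:
Code:
#include <linux/module.h>
#include <linux/slab.h>
#include <asm/io.h>

static void *buf;

static int __init phys_demo_init(void)
{
    buf = kmalloc(4096, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;
    /* print both addresses; cast so the full 64-bit value is shown */
    printk(KERN_INFO "virt=%p phys=%llx\n",
           buf, (unsigned long long)virt_to_phys(buf));
    return 0;
}

static void __exit phys_demo_exit(void)
{
    kfree(buf);
}

module_init(phys_demo_init);
module_exit(phys_demo_exit);
MODULE_LICENSE("GPL");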
I am trying to troubleshoot an Apache/MySQL server that once every few days falls over due to memory starvation. I thought I had tuned my httpd.conf file with a relatively small MaxClients and so on, but then noticed some unusual RES values in top while monitoring.
In short, `top -b -n1 -u apache` shows that I have 28 httpd processes. Their VIRT is ~300MB for each process (I understand this is shared), and the RES is ~50MB for each. I thought this wasn't shared. Is this true? I just noticed 2 of the processes jump to 1.2g. If RES represents non-shared memory, then conceivably 1.2g x 28 processes would be a problem on an 8GB server.
I am new to C and Linux. My code below does arbitrary writes, but I can't figure out where or how it does it.
I am calling the insertNode() function with seq = 'MISSISSPPI$' and alphabets = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ$'
Code:
Weird behaviour I should mention: when I check for a NULL pointer in node->child[index], the unassigned values are not NULL anymore; they point to arbitrary memory.
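A guess at the usual cause, sketched below; this is not the original code, and node, child, and insertNode only follow the names in the post. malloc() does not zero memory, so freshly allocated child[] slots hold garbage instead of NULL, and the NULL checks then follow arbitrary pointers; allocating nodes with calloc() (or memset()ing them after malloc()) fixes that:
Code:
#include <stdlib.h>

#define ALPHABET_LEN 27                  /* 'A'-'Z' plus '$' */

struct node {
    struct node *child[ALPHABET_LEN];
};

static struct node *newNode(void)
{
    /* calloc guarantees every child pointer starts out NULL */
    return calloc(1, sizeof(struct node));
}

static int charIndex(char c)
{
    return (c == '$') ? ALPHABET_LEN - 1 : c - 'A';
}

void insertNode(struct node *root, const char *seq)
{
    struct node *cur = root;
    for (; *seq; ++seq) {
        int idx = charIndex(*seq);
        if (cur->child[idx] == NULL)     /* reliable only on zeroed memory */
            cur->child[idx] = newNode();
        cur = cur->child[idx];
    }
}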
When I boot into Linux I see some shared memory which is blocked; because of this I am unable to start my processes which need some shared memory. I tried removing the shared memory using ipcrm -m, but to no avail: the shared memory is not cleaned up and is still in use. I tried removing the shared memory both as the root user and as a normal user, but I am unable to remove it.
I have to figure out how to pass data between a Fortran application and a C++ application. The memory has to be locked/unlocked to prevent data corruption. I know very little about Fortran. I have done mutex lock/unlock in multi-threaded Linux apps before.
Digging around on the Web, I do not find any info on a Fortran app including linux headers, library calls such as those in ipc.h, shm.h and sem.h. What is an approach to this problem?
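One common approach, sketched here under assumptions: keep all of the ipc.h/shm.h/sem.h (or POSIX shm_open/sem_open) calls inside a small C wrapper and call those wrapper functions from Fortran through ISO_C_BINDING, so the Fortran side never includes Linux headers at all. The names (lockshm.c, shm_region_*) are made up; link with -lrt -pthread:
Code:
/* lockshm.c: C-side wrapper that Fortran can bind to */
#include <fcntl.h>
#include <semaphore.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static sem_t *lock;
static void  *region;

/* create/attach a named shared region plus a named semaphore */
void *shm_region_open(const char *name, size_t size)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return NULL;
    if (ftruncate(fd, (off_t)size) == -1) { close(fd); return NULL; }
    region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    lock = sem_open("/shm_lock", O_CREAT, 0600, 1);
    return (region == MAP_FAILED || lock == SEM_FAILED) ? NULL : region;
}

/* both sides call these around every access to the shared region */
void shm_region_lock(void)   { sem_wait(lock); }
void shm_region_unlock(void) { sem_post(lock); }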
When the system is running, the system monitor shows a program/utility/kernel driver called kthreadd, and all the following programs are child processes of kthreadd. Is kthreadd part of elevator=cfq, and can kthreadd be influenced by parameters? kthreadd uses no CPU time, memory, or shared memory, so it must be in the kernel. Is there any documentation on this subject? System 11.3 on a quad processor, 8 GB of memory, NVIDIA card and driver.
How do I change the size of the available shared memory on Linux? Evidently 4GB is not enough for what I am doing (I need to load a lot of data into shared memory - my machine has 8GB of RAM).
Is it possible that SHM shared memory is counted as cache memory on Linux with kernel 2.6.18? I find it really odd, since this memory is not file-backed, but I have a piece of code that loads data using shm_open+mmap, and it generates an amount of cache memory in /proc/meminfo that corresponds exactly to the amount of shared memory (I load that data from a file, but I am using posix_fadvise(fd,0,0,POSIX_FADV_DONTNEED) to ensure this file is not cached, and I made sure that it is working as expected). As far as I know, SHM memory was not tagged as cache memory with kernel 2.6.9. If that is the case it is really unfortunate, since normally cache memory can be considered part of the "available" memory because it can be flushed promptly, but this is clearly not the case with SHM memory... Is there an easy way to get the total amount of used SHM memory on a system?
I am having the same problem. I deleted a partition while trying to create a new shared FAT32 one. Below is my fdisk -l output. I booted from a live CD and have tried quite a few things already... I think I need clear directions, because this is getting annoying...
Device Boot         Start         End      Blocks   Id  System
/dev/sda1               1        5906    47437866+   7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
My problem is that I installed Zone Minder for camera security and I'm testing it on my laptop with the built-in webcam. Everything seems to work perfectly, except that when I try to view the live feed from the camera, it's just a black box. No video.
I checked this website, and it describes exactly the problem I'm having, along with a fix for it, but his fix doesn't work. He says to type:
Code:
user@ubuntu:~$ sudo echo "256000000" > /proc/sys/kernel/shmmax
user@ubuntu:~$ sudo service apache2 restart
user@ubuntu:~$ sudo service zoneminder restart