In my program, I fork() to create a child process. Because of some bug, the child process terminates with a segmentation fault while the parent process keeps running. I have compiled my code with the -g option and have done: ulimit -c unlimited. Still, I am not getting a core dump of the child process. How can I get a core dump of the child process?
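For reference, a minimal sketch of the situation described, assuming the crash is a simple invalid write (the deliberate null-pointer store is illustrative, not the original code):
Code:
#define _DEFAULT_SOURCE   /* for WCOREDUMP on glibc */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        int *p = NULL;    /* child: crash deliberately */
        *p = 42;
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);   /* parent keeps running */
        if (WIFSIGNALED(status))
            printf("child killed by signal %d, core dumped: %s\n",
                   WTERMSIG(status), WCOREDUMP(status) ? "yes" : "no");
    }
    return 0;
}
Two things worth checking: resource limits are inherited across fork(), so ulimit -c unlimited must be in effect in the shell that starts the parent, and the core file is written to the child's current working directory (with the name pattern from /proc/sys/kernel/core_pattern).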
I want to generate core dump files from my program when it crashes. It's a pretty big process and has about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, I tried running my original program and caused it to crash. I did this by making calls to kill() or raise(), or with the same null pointer access as shown in the webpage above. In each case, my program crashed but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage as above, I can see two potential problems. One - there are issues with the suid/sgid (bullet # 6). I am not able to change any settings with suid because my system does not contain either /proc/sys/fs/suid_dumpable or /proc/sys/kernel/suid_dumpableTwo, my program has threads in it and the bullet # 8 is the problem.
I was wondering if 7735, 7736, 7737, and 7743 were really processes. Then I checked /proc: I could cd to /proc/7735, /proc/7736, etc., but they did not show up when I listed /proc with ls. I looked at the man page of "pstree", and it says,
Code:
Child threads of a process are found under the parent process and are shown with the process name in curly braces, e.g.
icecast2---13*[{icecast2}]
So, what does all this mean? Does it mean that 7735, 7736, 7737, and 7743 are just threads and not processes? If so, why could I cd to /proc/<id> but not see them in "ps -elf"?
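That is exactly what is happening: on Linux each thread has its own thread ID, and /proc accepts a TID as a directory name even though a directory listing of /proc only shows thread-group leaders, which is also what ps shows by default (ps -eLf lists every thread). A small sketch of the distinction, assuming a glibc old enough that gettid must be called via syscall():
Code:
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* build with: gcc -pthread demo.c */
static void *worker(void *arg)
{
    (void)arg;
    /* Same PID as main, but a distinct TID; it appears as
       /proc/<pid>/task/<tid>, and /proc/<tid> works too even
       though it is hidden from ls. */
    printf("worker: pid=%d tid=%ld\n", getpid(), syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t;
    printf("main:   pid=%d tid=%ld\n", getpid(), syscall(SYS_gettid));
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}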
Code:
Parent: chid_pid=4356 i=0 parent's pid=4355
This is child 4356 i=0
This is child 4357 i=1
[code]....
I can observe that instead of the two child processes I expect there are three. This is because child process 4356 creates its own child. Why are all the messages of the type "This is child X i=Y" grouped one under another? How exactly does fork() work? Is it affected by the fact that I have a dual-core processor?
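A guess at the shape of the code, since only its output is shown: when fork() is called inside a loop and the child does not exit or break out, the child continues the loop and forks again, which produces the extra process. The grouping of the messages is then down to scheduling and to stdio buffering (stdout is block-buffered when redirected, so output can appear in batches); the number of cores affects timing only, not correctness. A minimal sketch:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 2; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            printf("This is child %d i=%d\n", (int)getpid(), i);
            exit(0);   /* without this, the child forks again on the next i */
        }
        printf("Parent: chid_pid=%d i=%d parent's pid=%d\n",
               (int)pid, i, (int)getpid());
    }
    while (wait(NULL) > 0)
        ;   /* reap the children */
    return 0;
}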
Description of what the code does, or what I intended it to do:
1. Created a child process from the parent process using fork().
2. Sent a SIGALRM signal from the child process to the parent process using the sigqueue() function. (The third parameter of sigqueue() carries the message (msg) that the child process wants to send to the parent process; msg is a structure instance containing a) the pid of the child and b) a string.)
3. Printed the msg sent by the child process inside the signal handler function sig_action_function of the parent process. I am getting some junk value when this line is executed:
Code:
printf("%d ",msg->cpid);
I expected to get the pid of the child process, which the child process sent to the parent process through the signal.
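The junk value is most likely because the structure was passed by pointer: sigqueue()'s third parameter is a union sigval, which holds either an int or a pointer, and a pointer is only meaningful inside the sending process's own address space. After fork(), parent and child have separate address spaces, so dereferencing the child's pointer in the parent reads garbage. A sketch that carries the pid in the integer member instead (sig_action_function and SIGALRM are from the post; everything else is assumed):
Code:
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static void sig_action_function(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* printf in a handler is not async-signal-safe; demo only. */
    printf("payload from child: %d\n", info->si_value.sival_int);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = sig_action_function;
    sa.sa_flags = SA_SIGINFO;      /* required to receive the sigval */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    pid_t parent = getpid();
    if (fork() == 0) {
        union sigval v;
        v.sival_int = (int)getpid();   /* the child's own pid */
        sigqueue(parent, SIGALRM, v);
        _exit(0);
    }
    pause();    /* wait for the signal */
    return 0;
}
To send a whole structure (pid plus string), the signal can only act as a doorbell; the data itself needs a pipe or shared memory.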
I have a doubt about signals in C programming, and I have written this little program to illustrate it. It creates a child process with fork() and, when the child ends, receives the SIGCHLD signal and waits for its termination. OK, quite easy, BUT when I execute this code the SIGCHLD signal is received twice: the first time as an error (the call returns -1) and the second time to finish the child process. I don't understand the meaning of the first received signal. Why is it generated? Is the code wrong? (If you add SIGINT handling and press Ctrl+C during execution, it also receives two signals instead of one.)
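Without seeing the code this is a guess, but there are two usual causes of that first -1: a blocking call (wait, read, a sleep loop) being interrupted by the handler and failing with errno == EINTR, or wait() being called both inside the handler and in the main flow, so that the second call finds the child already reaped and fails with ECHILD. Neither means the signal was delivered twice. A defensive sketch that retries on EINTR:
Code:
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_sigchld(int sig)
{
    (void)sig;   /* just take note; reap in main, not here */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;    /* no SA_RESTART, so EINTR is visible */
    sigaction(SIGCHLD, &sa, NULL);

    if (fork() == 0) {
        sleep(1);
        _exit(0);
    }

    int status;
    pid_t r;
    /* A -1/EINTR here is an interrupted syscall, not a second signal. */
    while ((r = waitpid(-1, &status, 0)) == -1 && errno == EINTR)
        continue;
    if (r > 0 && WIFEXITED(status))
        printf("child %d exited with status %d\n", (int)r, WEXITSTATUS(status));
    return 0;
}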
I have just started with Linux. How can I convert a core dump file to a readable text file that includes all the information in the core dump, such as all variables, thread information, and the call trace for each task? I know GDB can view this, but it won't dump all the information to one text file, and sometimes people want to look into the reason for the core dump without a Linux environment.
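One approach, assuming the executable that produced the core is available: gdb's batch mode runs a fixed list of commands and exits, so its output can be redirected into a single text file and read anywhere (myprog, core, and coredump.txt are placeholder names):
Code:
gdb -batch -ex "info threads" -ex "thread apply all bt full" -ex "info registers" ./myprog core > coredump.txt
Note that the core alone is not self-describing: without the matching executable (and ideally its debug information), no tool can recover variable names or a symbolic call trace from it.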
I want to kill the parent process after the fork(). But if I end the parent process with exit(0), the main() thread is terminated as well, so the child process doesn't work anymore. Is there any way to kill only the parent process without affecting the child process?
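For what it's worth, exit(0) in the parent ends only the parent process: a forked child is a separate process and keeps running, re-parented to init. If the child dies when the parent exits, something else is involved (for example, the "child" is really a pthread inside the parent, or a signal is sent to the whole process group). A minimal sketch of a child outliving its parent:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid > 0) {
        printf("parent %d exiting\n", (int)getpid());
        exit(0);             /* ends only the parent */
    }
    sleep(2);                /* child survives, adopted by init */
    printf("child %d still alive, new parent %d\n",
           (int)getpid(), (int)getppid());
    return 0;
}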
I have created three child processes from one parent, and different children have different functions. Child 2 has to run the program "wc" to count file1, and it is required to get its files from command-line arguments. I can get the files through the command line, but I can't get them when the child 2 process starts.
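A guess at the intended shape, since the original code isn't shown: the file names from the parent's argv must be passed explicitly into the exec call made by child 2. A sketch, assuming wc lives at /usr/bin/wc and the file name arrives as argv[1]:
Code:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file1\n", argv[0]);
        return 1;
    }
    pid_t pid = fork();           /* child 2 in the real program */
    if (pid == 0) {
        /* Hand the parent's command-line argument on to wc. */
        execl("/usr/bin/wc", "wc", argv[1], (char *)NULL);
        perror("execl");          /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}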
I have a script that calls other scripts/commands, which may or may not spawn other processes. From my understanding, when I do a ps -ef, the highest-numbered process ID is supposed to be the parent of all the other related child processes; is this correct? In most or all circumstances, I do a ps -ef | grep <processid> of my script, and anything spawned off that process ID I assume to be a child process of my script. If I want to terminate my script and all the child processes, I kill the parent ID, which is the highest-numbered PID, and this subsequently kills all the other child process IDs; is this correct?
Now, my question is whether there is any quick way of showing the child processes of a given parent ID, instead of what I am doing now, which is visually checking which one is the parent and "assuming" that the highest-numbered PID is the parent of all the other processes. Below is a sample output of running ps -ef | grep exp | grep -v grep. I assume from the output below that the parent process ID is 11322; is that correct?
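The parent/child relationship doesn't actually depend on which PID is numerically highest (PIDs also wrap around): every process records its parent's PID, and ps can filter on it directly. Two commands that list a process's children, using the 11322 from the question as the example PID:
Code:
ps -ef --ppid 11322
pstree -p 11322
Also note that killing the parent does not automatically terminate its children; to signal the whole group in one go, kill accepts a negated process-group ID (kill -- -<pgid>).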
I wonder how one can, if at all, run an X program in the background *in a terminal-emulator sub-shell process*. What I mean is to launch a program in an emulator, e.g. by xterm -e gedit
but with gedit (in this example) running in the background from inside the xterm sub-process, so that the xterm will accept other commands. In the above, gedit runs in the foreground, and of course, if you do xterm -e gedit &
then xterm runs in the background, not gedit. In short, I would like to achieve the same thing as typing "gedit &" manually in xterm, but from another shell. What I aim to do is write an X init script that achieves this result (having the emulator open with a program or two running from it, in the background, at X startup).
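One way that should work, assuming a Bourne-style shell is available: have xterm run a shell that backgrounds gedit and then replaces itself with an interactive shell, so the xterm stays usable:
Code:
xterm -e sh -c 'gedit & exec sh' &
The inner sh launches gedit in the background and then exec's a shell attached to the xterm's terminal; the trailing & backgrounds the xterm itself, which is what an X init script needs.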
I'm studying signals in the Linux kernel, and I ran into a problem with signal handlers and the output buffer.
I just want to understand how the stdout buffer behaves between a parent process and a child process.
The problem is: when the parent process receives SIGINT, the signal_handler I implemented is called, and it prints the string "pid : xxx state : RUNNING" ... but after the signal_handler function returns, the child process should print a string, yet nothing is printed at all.
I'm not asking for corrected code; I want to know why this happens, and to understand the concepts behind signal handlers and buffering between a parent process and a child process.
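One concept that explains many surprises here: stdio buffers live in user-space memory, so fork() copies any unflushed output into the child, and text that is never flushed before _exit() is silently lost. Whether stdout is line-buffered (terminal) or block-buffered (pipe/file) changes when flushes happen. A small demonstration, independent of the original program:
Code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("buffered before fork");   /* no newline: stays in the buffer */

    if (fork() == 0) {
        /* The child inherited a copy of the buffer; flushing it
           prints the parent's unflushed text a second time. */
        fflush(stdout);
        _exit(0);
    }
    printf(" ...parent\n");           /* parent flushes its own copy */
    return 0;
}
Calling fflush(stdout) before fork(), or using the unbuffered write() syscall inside signal handlers and children, avoids both the duplication and the lost output.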
I am facing an issue where the process starts hanging. When I look closely at the logs, I see that some of the child processes forked by the parent process have not finished.
1) Is it possible that the child processes that have not finished occupy the socket memory of the parent process, until ultimately a point is reached where no socket memory is available to fork new child processes?
2) What is the standard limit on socket memory in Linux?
3) What is the fate of such child processes (as I have mentioned above)? (See the sketch after these questions.)
4) How do I debug such cases so that the exact problem area is identified?
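On 3) and 4): a child that has exited but has never been waited on becomes a zombie, which keeps a process-table slot but no longer holds memory or sockets of its own (descriptors the parent shares with it stay open until both sides close them). A low-tech debugging aid is to have the parent reap non-blockingly and log exactly which child PIDs never report back; strace -p <pid> and /proc/<pid>/status then show what a stuck child is doing. A sketch of the reaping part:
Code:
#include <errno.h>
#include <stdio.h>
#include <sys/wait.h>

/* Call periodically from the parent's main loop: reaps and reports
   finished children without blocking when none have exited. */
static void reap_children(void)
{
    int status;
    pid_t pid;
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
        if (WIFEXITED(status))
            printf("child %d exited with %d\n", (int)pid, WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("child %d killed by signal %d\n", (int)pid, WTERMSIG(status));
    }
    if (pid < 0 && errno != ECHILD)
        perror("waitpid");
}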
I am using RHEL 4.7 (32-bit) on an HP ProLiant 380 G6 series server. We run Electric Cloud agents on these servers. Lately we have been facing memory issues that cause a kernel panic and then a server restart. When I reported the issue to my application team, they asked me to come back with the core dump. I googled it enough, then set the ulimit value to unlimited (previously it was 0; I made an entry in /etc/profile as follows: ulimit -c unlimited). But still, whenever my server restarts due to that kernel panic, it doesn't generate a core dump. My application is installed in /opt.
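A note on the distinction, in case it saves some searching: ulimit -c controls per-process core files, written when a user-space program crashes. A kernel panic is not a process crash, so no ulimit setting will ever produce a dump for it; capturing kernel state on RHEL 4 requires a kernel crash-dump facility such as netdump or diskdump (kdump replaced these in RHEL 5), and the application team would then analyse that vmcore rather than a process core.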
I am developing an application whose executable is generated inside a certain folder hierarchy (say /DevPath/MyProject/bin). My source code is located in a different branch of this hierarchy (say /DevPath/MyProject/src). When my app crashes, its core files are stored in /DevPath/MyProject. I develop the app on a PC but run it on a separate platform on which I can only execute it. The folder hierarchy is the same on both computers. Usually, when a new executable version is ready, we update both the executable and the source code on the target platform, transferring the whole new /DevPath/MyProject folder to it. But sometimes that is a real bother, so we update only the executable.
1) In the case where we only update the executable, keeping an old source code version, and the app generates a core file, can I trust the backtrace produced by gdb? I.e., does gdb need the latest source code files, or just the debugging information?
2) (More radical question) Do I really need to keep the source code on the target platform for core dump analysis, or do I just need the executable?
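As far as I know, gdb needs only the executable (with its embedded debug information) and the core file to produce a correct backtrace; source files are consulted purely to display the matching lines, so a stale or missing source tree gives a trustworthy backtrace with possibly wrong or absent source listings. What does matter is that the executable used for analysis is byte-for-byte the one that produced the core; otherwise addresses and symbols won't line up.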
In one of our core dumps we have the following in the core backtrace:
Code:
#0  0xb77bf947 in raise () from /lib/tls/libc.so.6
#1  0xb77c10c9 in abort () from /lib/tls/libc.so.6
#2  0xb77f56ba in __fsetlocking () from /lib/tls/libc.so.6
#3  0xb77fcf7f in mallopt () from /lib/tls/libc.so.6
#4  0xb77fd022 in free () from /lib/tls/libc.so.6
It occurred in a memory-block free operation. From our analysis, there seems to be no issue with the memory block itself: the pointer pointed to the right memory block to be freed, and the contents of the memory seem right (not corrupted); in a word, there is nothing obviously wrong. Does anyone have any ideas what could be wrong when seeing the above?
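For what it's worth, when glibc aborts inside free() like this, the usual cause is corrupted heap bookkeeping around the block rather than the block's own contents, typically from a buffer overrun on a neighbouring allocation, a double free, or freeing a pointer that malloc never returned, so the block being freed can look perfectly healthy. Running the program under valgrind, or with the glibc environment variable MALLOC_CHECK_=2 set, is a common way to catch the corruption at its source. (The __fsetlocking/mallopt frame names are likely artifacts of symbol resolution against a stripped libc, not the functions actually executing.)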
To analyse a core dump, I need to specify the program name/path in GDB/KDevelop. Since the program name along with its arguments is also stored within a core dump, I wonder: does it not keep the proper path of the program that crashed, and is that why it asks for it?
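As far as I know, the core records the command line (the file utility will show it: file core prints something like "core file ... from 'myprog arg1'"), but that is just the argv string, not a reliable absolute path, and the core holds only the process's memory image with none of the executable's code or symbols. So gdb asks for the binary because it genuinely needs it to interpret the dump, not merely because the path is missing.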
I am trying to port some "C" code from Solaris to Linux. I have a Dell PowerEdge R610 with an Intel Xeon E5504 quad-core processor running Red Hat Enterprise Linux 5.3. I am compiling in 64-bit mode. I have managed to get the code compiled and linked, but when I attempt to execute it, I get a core dump in one of the C library calls (like strcpy or printf).
I have a static library that contains our own code that makes the call to the C library. If I move the library method into the source file with the main method, and rename it to be certain that I am executing my method instead of the one in our library, the call succeeds. Eventually another static library call is made that results in a core dump in the shared object. I compile my library code into a static library with gcc as:
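One classic cause of exactly this pattern when porting 32-bit-era C to 64-bit Linux is a missing prototype: if a call site has no declaration in scope, C89 assumes the function returns int, so a returned pointer is truncated to 32 bits and the program later crashes inside an innocent libc call such as strcpy or printf. Moving the function into the same file "fixes" it because the definition then acts as a declaration. Compiling everything with -Wall and watching for "implicit declaration" warnings flushes these out. A sketch of the hazard, with hypothetical names:
Code:
/* lib.c -- lives in the static library */
#include <stdlib.h>
#include <string.h>

char *make_greeting(void)
{
    char *s = malloc(32);
    strcpy(s, "hello");
    return s;                     /* a full 64-bit pointer */
}

/* main.c -- note: no header declares make_greeting() */
#include <stdio.h>

int main(void)
{
    /* Implicit declaration: the compiler assumes an int return,
       so the pointer is truncated to 32 bits on x86_64. */
    char *s = (char *)make_greeting();
    printf("%s\n", s);            /* crashes in libc, far from the bug */
    return 0;
}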
Assume someone binds a particular process to a particular CPU core (on a multi-core machine) using a function like sched_setaffinity(). How can we then get the core ID that the process is running on, and the CPU utilisation of that process on that core (programmatically or with a Linux command)?
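Programmatically, glibc's sched_getcpu() reports the core the calling thread last ran on; for another process, the "processor" field (field 39) of /proc/<pid>/stat carries the same information, and ps -o pid,psr,pcpu prints the last-used core and the CPU share per process. A sketch of the in-process side, pinning to core 0 as an assumed example:
Code:
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                 /* pin ourselves to core 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    printf("pid %d is running on core %d\n", (int)getpid(), sched_getcpu());
    return 0;
}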
How can we run a Linux process like tar on multiple cores? For example, if we want to build the kernel, we can use -j4 to distribute the build over 4 different cores. Is it possible to run a long, time-consuming process on multiple cores?
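make -j4 parallelises only because make launches several independent compiler processes; a single process like tar uses one core unless the program itself is multi-threaded. In a tar pipeline the part that can usually be parallelised is the compression, by swapping in a multi-threaded compressor such as pigz (parallel gzip), assuming it is installed:
Code:
tar -cf - somedir | pigz > somedir.tar.gz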
I have written a simple script which has to find required patterns in a bunch of files (each around 2 GB, containing the output of seq 1 10000000000000) on an 8-core machine. I currently fork 6 child processes, which run simultaneously on 6 cores of the processor; each has to search for the required pattern in one of 6 different files and inform the parent process through a pipe when a pattern is found.
The problem is, when a child process is done reading its text file looking for the pattern, it becomes a zombie process. It exits cleanly when I put $SIG{CHLD} = "IGNORE"; in the script. Can anyone tell me what's going on, and how do I improve the communication between the child and parent processes?
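That is normal Unix behaviour rather than a bug: an exited child remains a zombie until the parent collects its exit status with wait/waitpid, and $SIG{CHLD} = "IGNORE"; tells the kernel to auto-reap instead. The usual alternative is to reap explicitly (in Perl, a waitpid loop with WNOHANG inside a CHLD handler), which has the advantage that the parent also notices a child that died before writing its result to the pipe.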
I have a process which runs for a while and then, for some unknown reason, ends abruptly without leaving a core. I looked in the logs and, after watching for a pattern, am still not 100% sure of the reason. So I was wondering if there is a way to catch the signal that ends the process and print some values in the handler function. I have installed handlers for SIGTERM and SIGABRT, but neither of them is triggered. I looked online and did not find any other option. Can you please suggest whether there is any other signal that can be caught for this unknown abrupt termination?
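A sketch of casting a wider net, since SIGTERM and SIGABRT are only two of the possibilities: install one handler for all the common fatal signals, log, then re-raise with the default action so the exit status (and core, if any) is preserved. Note that SIGKILL cannot be caught at all; an abrupt, handler-less death is often the kernel OOM killer, which logs to dmesg.
Code:
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void fatal_handler(int sig)
{
    /* snprintf is not strictly async-signal-safe; last-words logging only. */
    char msg[64];
    int n = snprintf(msg, sizeof(msg), "dying on signal %d\n", sig);
    write(STDERR_FILENO, msg, (size_t)n);   /* write() is signal-safe */
    signal(sig, SIG_DFL);                   /* restore default action */
    raise(sig);                             /* die "properly" */
}

int main(void)
{
    int fatal[] = { SIGSEGV, SIGBUS, SIGILL, SIGFPE, SIGABRT,
                    SIGTERM, SIGINT, SIGQUIT, SIGHUP, SIGPIPE };
    for (size_t i = 0; i < sizeof(fatal) / sizeof(fatal[0]); i++)
        signal(fatal[i], fatal_handler);

    /* ... program body ... */
    raise(SIGSEGV);   /* demo: trigger the handler */
    return 0;
}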
I have a root process (on Linux) that forks a child, and the child process then drops privileges by doing a setuid() to a normal user. After the child setuid()'s, it is of course impossible for it to regain root by itself. But since the main process is still running as root, I was wondering if there is a simple/smart way of getting the root master process to elevate the child back to root (or maybe just to another non-privileged uid). Is there some way to do a setuid() on another pid? Or maybe something can be done through /proc/<pid>/? Killing the child is not an option (that is what it does today, and I'm trying to find a smarter way). (The program is apache2's mpm-itk worker, and the "child" is the actual apache2 process serving a page.)
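As far as I know there is no system call or /proc interface for changing another process's UID; credentials can only be changed by a process from the inside. The usual designs are either to drop privileges reversibly with seteuid()/setresuid(), keeping root in the saved UID so the child can switch back on its own (which of course weakens the privilege drop), or to keep a privileged helper process and pass work, or open file descriptors, back to it over a Unix-domain socket (SCM_RIGHTS) instead of re-elevating the worker.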