Programming :: Core Dump Analysis Requires A Program Name / Why Is That So?
Jul 21, 2010
To analyse a core dump, I need to specify the program name/path in GDB/KDevelop. Since the program name, along with its arguments, is also stored within a core dump, I wonder: does the core not keep the proper path of the program that crashed, and is that why the debugger asks for it?
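For reference, a sketch of how the stored command string can be inspected, and how GDB is usually pointed at both files (core and ./myprog are placeholder names):

Code:
# the core records the command line it came from, which 'file' can display
file core
#   core: ELF 32-bit LSB core file Intel 80386 ... from './myprog arg1'
# GDB still needs the executable itself for its symbols and section layout
gdb ./myprog core

The core stores the command string, but not a guaranteed absolute path to the binary, which is presumably why debuggers ask for it explicitly.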
I want to generate core dump files from my program when it crashes. It's a pretty big process and has about 10-11 threads in it. I have followed the documentation to enable core dumps, setting ulimit to unlimited etc. I quickly tried "A demo program creating a core dump" from the webpage I was following, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, I then ran my original program and caused it to crash. I did this by making calls to kill(), raise(), or the same null pointer access as shown in the webpage above. In each case, my program crashed but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage, I can see two potential problems. One: there are issues with suid/sgid (bullet #6); I am not able to change any settings for suid because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two: my program has threads in it, and bullet #8 describes that problem.
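For what it's worth, a checklist in shell form for the threads angle (a sketch; it assumes a 2.4-era /proc layout and uses myprog as a stand-in for the real binary):

Code:
ulimit -c unlimited                      # must be set in the shell that starts the program
cat /proc/sys/kernel/core_uses_pid       # 1 appends the PID to the core file name, which
                                         # helps with multi-threaded (LinuxThreads) programs
echo 1 > /proc/sys/kernel/core_uses_pid  # needs root
cd /tmp && ./myprog                      # cores land in the process's working directory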
I am developing an application whose executable is generated inside a certain folder hierarchy (say /DevPath/MyProject/bin). My source code is located in a different branch of this hierarchy (say /DevPath/MyProject/src). When my app crashes, its core files are stored in /DevPath/MyProject. I'm developing the app on a PC, but running it on a separate platform on which I can only execute it. The folder hierarchy is the same on both computers. Usually, when a new executable version is ready, we update both the executable and the source code on the target platform, transferring the whole new /DevPath/MyProject folder onto it. But sometimes that can really be a bother, so we update only the executable.
1) In the case where we only update the executable, keeping an old source code version, and the app generates a core file, can I trust the backtrace produced by gdb? I.e., does gdb need the latest source code files, or does it just need the debugging information?
2) (More radical question) Do I really need to keep the source code on the target platform for core dump analysis, or do I just need the executable?
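A small experiment along these lines (a sketch; ./myapp and core stand in for the real names) is to move the source tree aside and see what gdb still gives you:

Code:
gdb ./myapp core
(gdb) bt      # function names and line numbers come from the debug info in the executable
(gdb) list    # only this step reads the source files themselves; with stale sources the
              # listed lines may no longer match what the binary actually executes

In other words, the backtrace itself should be trustworthy even with mismatched sources; only source listings can mislead.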
In one of our core dumps we have the following in the backtrace:
#0  0xb77bf947 in raise () from /lib/tls/libc.so.6
#1  0xb77c10c9 in abort () from /lib/tls/libc.so.6
#2  0xb77f56ba in __fsetlocking () from /lib/tls/libc.so.6
#3  0xb77fcf7f in mallopt () from /lib/tls/libc.so.6
#4  0xb77fd022 in free () from /lib/tls/libc.so.6
It occurred in a memory block free operation. From our analysis, there seems to be no issue related to the memory block itself: the pointer pointed to the right memory block to be freed, and the contents of the memory seem right (not corrupted). In a word, there is nothing obviously wrong. Does anyone have any idea what could be wrong when seeing the above?
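One pattern consistent with a trace like this (abort reached from inside free(); the __fsetlocking and mallopt frames are likely nearest-symbol artifacts from a stripped libc rather than real calls) is glibc's own heap-consistency check failing, e.g. after a double free or an overrun of a neighbouring block, even when the block being freed looks intact. A sketch of two quick checks, with ./myapp as a placeholder:

Code:
# make glibc's allocator diagnose and abort at the first bad heap operation
MALLOC_CHECK_=2 ./myapp
# or let valgrind point at the earlier corruption (if valgrind is available)
valgrind --tool=memcheck ./myapp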
I remotely access my openSUSE 11 machine using the TightVNC program. There is nothing wrong with the access itself. But I have to start an analysis program, and this analysis takes about 4 to 6 hours; therefore, I want to disconnect and let my analysis continue. But after disconnecting from SUSE, the program I started stops working. Does anybody know how to keep the program from being killed while I am not connected?
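A sketch of the usual workarounds (./analysis stands in for the real command):

Code:
# detach the job from the login session so disconnecting doesn't kill it
nohup ./analysis > analysis.log 2>&1 &
# or run it inside screen and reattach later (if screen is installed)
screen -S work        # start a session, then launch the program inside it
screen -r work        # reattach after reconnecting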
I am trying to port some C code from Solaris to Linux. I have a Dell PowerEdge R610 with an Intel Xeon E5504 quad-core processor running Red Hat Enterprise Linux 5.3. I am compiling in 64-bit mode. I have managed to get the code compiled and linked, but when I attempt to execute it, I get a core dump in one of the C library calls (like strcpy or printf).
I have a static library that contains our own code that makes the call to the C library. If I move the library method into the source file with the main method and rename it, to be certain that I am executing my method instead of the method in our library, the call succeeds. Eventually another static library call is made that results in a core dump in the shared object. I compile my library code into a static library with gcc as:
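One classic cause of exactly this on 64-bit ports (a guess, but cheap to rule out) is a missing prototype: C treats an undeclared function as returning int, so a returned pointer gets silently truncated to 32 bits, and the next C library call that touches it faults. Rebuilding with warnings turned up usually exposes it (mylib.c is a stand-in name):

Code:
# rebuild both the static library and the main objects with strict warnings
gcc -m64 -Wall -Wimplicit-function-declaration -c mylib.c
# any "implicit declaration of function" warning is a candidate for a
# 64-bit pointer-truncation bug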
In my program, I fork() to get a child process. Because of some problem, the child process terminates with a segmentation fault. The parent process is still running. I have compiled my code with the -g option, and I have done: ulimit -c unlimited. But I am not getting a core dump of the child process. How can I get the core dump of the child process?
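A sketch of the pieces that have to line up (./parent stands in for the real program): the limit must be set in the shell that launches the parent (children inherit it), the child's working directory must be writable, and distinct core names help when several processes dump:

Code:
ulimit -c unlimited
# name cores distinctly so one core doesn't overwrite another
# (needs root; assumes a 2.6+ kernel with /proc/sys/kernel/core_pattern)
echo "core.%e.%p" > /proc/sys/kernel/core_pattern
./parent
ls -l core.*        # look for core.<child-name>.<child-pid>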
I am using RHEL 4.7 (32-bit) on an HP ProLiant 380G6 series server. We are using Electric Cloud Agents on these servers. Nowadays we are facing some memory issues that cause kernel panics, after which the server restarts. When I reported the issue to my application team, they asked me to come back with the core dump. I googled it enough, then I set the ulimit value to unlimited (previously it was 0; I made an entry in /etc/profile as follows: ulimit -c unlimited). But still, whenever my server restarts due to that kernel panic, it doesn't generate the core dump. My application is installed in /opt.
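One thing worth noting: ulimit -c only governs per-process core files, and a kernel panic is not a process crash, so it will never produce one. Capturing a panic needs a crash-dump facility instead. A sketch, assuming the netdump/diskdump services that RHEL 4 shipped:

Code:
# check whether a kernel crash-dump service is configured at all
chkconfig --list | grep -E 'netdump|diskdump'
service netdump status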
I have used the dump command to back up the application files. For a full backup, level 0 is working fine. For an incremental backup, I used level 1 or 2, and it fails with the error:
DUMP: Only level 0 dumps are allowed on a subdirectory
DUMP: The ENTIRE dump is aborted.
The code I used:

Code:
#!/bin/bash
# Full Day Backup Script
# application folders backup
# test is the username
now=$(date +"%d-%m-%Y")
[Code]...
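The message itself is the clue: dump only supports levels above 0 on a whole filesystem, not on a subdirectory. A sketch of the two usual ways around that (device, paths, and levels are placeholders):

Code:
# either run the incremental against the filesystem that holds the directory...
dump -1 -f /backup/app.dump.1 /dev/sda5
# ...or do directory-level incrementals with GNU tar and a snapshot file
tar --listed-incremental=/backup/app.snar -czf /backup/app-$(date +%F).tar.gz /opt/app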
I have only just started with Linux. May I know how I can convert a core dump file to a readable text file that includes all the information in the core dump, such as all variables, thread information, call traces for each task, and so on? I know GDB can view this, but it won't dump all the information to one text file, and sometimes people want to view the reason for the core dump without a Linux environment.
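gdb's batch mode can get most of the way there; a sketch (./myprog and core are placeholder names):

Code:
gdb -batch \
    -ex "info threads" \
    -ex "thread apply all bt full" \
    -ex "info registers" \
    ./myprog core > core-report.txt
# "bt full" includes the local variables of every frame, for every thread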
I am looking for an mp3 file analysis program (shell preferred, or X), something that would give me output similar to what LAME produces during the encoding phase.
Quote:
Frame            | CPU time/estim | REAL time/estim | play/CPU |  ETA
9342/10156 (92%) |   0:06/0:06    |   0:06/0:06     | 39.940x  | 0:00
32 [ 80] %***
[code]....
I need an app that can analyze VBR/ABR files: not just output a bogus bitrate, but return more detailed info. LAME has a '-g' (run graphical analysis) option which has to be enabled at compile time; I tried several ways and -g is still disabled, plus there is not much info on -g, and I do not even know if this is what I am looking for.
I've a program that launches new processes and waits for them to die before it exits. So, for example, my program is a process; it launches 3 more processes, and when the 3 child processes end, it will exit.
As you can see, by the end of the example the program has used a total of 4 processes.
1 - Now, I'm running this program on a CPU with 4 cores. Does this mean that the program used one core for each process?
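Not necessarily: the scheduler is free to place, and to migrate, the 4 processes on any cores. One way to observe the actual placement (a sketch; myprog is a placeholder for the program name) is the PSR column:

Code:
ps -o pid,psr,pcpu,comm -C myprog   # PSR = the core each process last ran on
# top can show the same thing if you enable its "last used CPU" column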
Assume someone binds a particular process to a particular CPU core (on a multi-core machine) by using sched_setaffinity()-like functions. How can we then get the core ID that the process is running on, and the CPU utilisation of that process on that core (programmatically, or via a Linux command)?
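A sketch of the command-line side (1234 is a placeholder PID; from inside the process, glibc's sched_getcpu() returns the core the calling thread is on):

Code:
ps -o pid,psr -p 1234    # PSR: the core the process last ran on
taskset -cp 1234         # the affinity list it is currently bound to
mpstat -P 2 1            # per-second utilisation of core 2 (from the sysstat package)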
I am currently looking for static/dynamic code analysis tools for embedded Linux system development (both device drivers and user-space apps). We will use the Eclipse IDE and the C++ language. I hope for tools that are easy to use, reliable, popular, well supported, and not too expensive. I have already found a list of tools on a wiki; however, I don't have time to try them all. Could anybody please recommend a few? If you can tell me briefly about their pros and cons, that would be the best.
I have a process which runs for a while and then, for some unknown reason, ends abruptly without producing a core. I tried looking in the logs, and after watching for a pattern I am still not 100% sure of the reason. So I was wondering if there is a way to catch the signal which ends the process and print some values in the handler function. I have installed handlers for SIGTERM and SIGABRT, but neither of these is getting triggered. I looked online and did not find any other option. Can you please suggest whether there is any other signal that can be caught for this unknown abrupt termination?
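Two things can be checked from outside the process without touching the handlers (and note that SIGKILL can never be caught). A sketch, with ./myproc as a placeholder:

Code:
./myproc; status=$?
echo "exit status: $status"   # by shell convention 128+N means death by signal N,
                              # e.g. 139 = SIGSEGV, 137 = SIGKILL
dmesg | tail                  # segfaults are usually logged by the kernel
# or watch every signal delivery live:
strace -f -e trace=signal ./myproc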
I'm trying to write a C program that extends an array to any size the user inputs.
Code:
if (arraysize == 0) {
    arraysize = (int) pos + 1;
    a = (int *) calloc(arraysize, sizeof(int));
    for (i = 0; i < arraysize; i++)
        a[i] = -1;
code....
The program dumps core with that sequence of inputs every time, but it might dump one input before or after if different positions are requested. Interestingly, when I tested pos = 2000..2008, I got no dumps. So is realloc somehow trying to extend the array into bad space?
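realloc does not hand out "bad space", but two classic usage bugs produce exactly this kind of intermittent dump: keeping the old pointer after realloc has moved the block, and reading the newly added elements before initialising them. A sketch of the usual safe pattern (only the names from the snippet above are taken from the original; the rest is assumed):

Code:
#include <stdlib.h>

/* grow 'a' from 'arraysize' to 'newsize' elements, filling new slots with -1 */
int *grow(int *a, int arraysize, int newsize)
{
    int *tmp = realloc(a, newsize * sizeof(int));  /* the block may move */
    if (tmp == NULL)
        return NULL;              /* old block is still valid on failure */
    for (int i = arraysize; i < newsize; i++)
        tmp[i] = -1;              /* realloc does not initialise the new tail */
    return tmp;                   /* the caller must use the returned pointer */
}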
I would like to create code that works on Windows XP. I would like something like Visual Basic, or something very simple, so that I can run some execlp, exec, or command="program.exe parameters" calls, with some internet support as well. GCC? G++? But those require some DLLs to be installed on any PC before running. It is for a friend, to simplify his operations.
I am seeing cores dumped in a particular directory continuously. I am sure there must be a script that keeps starting a process which core-dumps and dies. But how can I find which process it is?
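A couple of quick ways to identify the dying program (a sketch; names are placeholders):

Code:
file core.*    # 'file' prints the command each core came from
# or tag future cores with the executable name and PID (needs root; 2.6+ kernel)
echo "core.%e.%p" > /proc/sys/kernel/core_pattern
# segfaults also usually show up in the kernel log
dmesg | grep -i segfault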
Obviously I don't know that much about CPU architecture. I have a fancy new quad-core CPU in the machine I just built. Now, given the current state of technology, two, and many times three, of those cores are pretty much sitting idle. My question is: is there any way to utilize those extra cores with programs not optimized to take advantage of them? If I have some CPU-intensive application, is there any way to tell it to utilize a core that's idling, instead of the first core that probably already has a bunch of processes running on it?
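The scheduler already spreads runnable processes across cores automatically, and a single-threaded program can never use more than one core at a time, but a process can be steered onto a specific idle core. A sketch with taskset (core numbers and names are examples):

Code:
taskset -c 3 ./cpu_hog    # start the program pinned to core 3
taskset -cp 3 1234        # or move already-running PID 1234 onto core 3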
I am trying to install a program on my Linux system (Fedora Core 5), but it fails because there is no fort77 compiler. I know that I have a working ifort on my system, but I need fort77. It looks like the program that I am trying to install can also be compiled with g77, but again, this one is also missing. How can I get these compilers and make them work on my system?
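A sketch of the usual route (the exact package name is an assumption; gcc 4.x-era Fedora moved g77 into a compatibility package, so let yum search for it):

Code:
yum search g77 fort77            # find whatever compat package the distro ships
yum install compat-gcc-34-g77    # assumed name of the FC-era g77 compat package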
I will have to code this; however, I am lacking time since I have too much to do: make a short bash/dash script to prompt for the country with Zenity, then get the PLS or M3U URL and prompt with another Zenity dialog for which radio station to play. http://www.listenlive.eu/index.html
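A minimal sketch of the flow (the station list is hard-coded with placeholder URLs rather than parsed from the site, and mplayer is an assumed player):

Code:
#!/bin/bash
# country -> station -> play; all entries are illustrative placeholders
country=$(zenity --list --title="Country" --column="Country" France Germany) || exit 1
case "$country" in
    France)  stations=("FIP|http://example.com/fip.m3u" "Nova|http://example.com/nova.pls") ;;
    Germany) stations=("DLF|http://example.com/dlf.pls") ;;
esac
station=$(printf '%s\n' "${stations[@]}" | cut -d'|' -f1 | \
          zenity --list --title="Station" --column="Station") || exit 1
url=$(printf '%s\n' "${stations[@]}" | grep "^$station|" | cut -d'|' -f2)
mplayer -playlist "$url"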
My system has a 4-core Intel Xeon CPU (8 logical cores with hyperthreading) and runs 64-bit Linux. I want to allocate one core to general processing (kernel processes and other processes), and then allocate the rest of the cores to a specific multi-threaded program.
Q1: I know that I can use pthread_setaffinity() for user-mode threads and mpstat for monitoring. So how can I allocate a core to kernel processes and monitor it?
Q2: How can I restrict the use of those cores to the multi-threaded program? I don't want kernel processes to use the cores reserved for the multi-threaded program.
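A sketch of the usual recipe (core numbers are examples): the isolcpus boot parameter keeps the scheduler from placing ordinary tasks on the listed cores, and taskset then places the program there explicitly. Note that some kernel threads are per-CPU (e.g. ksoftirqd/N) and cannot be moved off their core at all:

Code:
# on the kernel command line (e.g. in the grub config), reserve cores 1-7:
#   isolcpus=1-7
taskset -c 1-7 ./mt_program    # start the multi-threaded program on the reserved cores
mpstat -P ALL 1                # watch per-core utilisation (sysstat package)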