CentOS 5 :: Make Daemons Dump Core?
Oct 22, 2010
I modified the following files according to everything I found while googling:
/etc/security/limits.conf
* soft core unlimited
/etc/profile
ulimit -c unlimited [code]....
I don't get a core file when I send kill -11 <pid_of_sleep>.
The system is CentOS 5.3.
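One detail worth checking on CentOS 5: daemons launched from init scripts never read /etc/profile, and the stock init-script helper (the daemon() function in /etc/init.d/functions) takes its core limit from /etc/sysconfig/init instead. A minimal sketch; the core_pattern line is optional and its value is only an example:
# /etc/sysconfig/init -- read by the daemon() helper in /etc/init.d/functions
DAEMON_COREFILE_LIMIT='unlimited'
# optional: send cores somewhere predictable instead of each daemon's cwd
sysctl -w kernel.core_pattern=/var/tmp/core.%e.%p
# restart the daemon afterwards; for the interactive kill -11 test, log in again
# first so the limits.conf change is actually applied to the test shell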
View 3 Replies
Oct 7, 2009
I want to generate core dump files from my program when it crashes. It's a pretty big process with about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, when I ran my original program and caused it to crash, by calling kill(), raise(), or the same null pointer access shown on the webpage above, the program crashed in each case but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage, I can see two potential problems. One, there are issues with suid/sgid (bullet #6); I am not able to change any suid settings because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two, my program has threads in it, and bullet #8 is the problem.
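For what it's worth, a few generic checks that narrow down missing-core problems on older kernels; "./myprogram" is a placeholder for the real binary:
ulimit -c                                        # must show "unlimited" in the shell that launches the program
cat /proc/sys/kernel/core_uses_pid 2>/dev/null   # if present and set to 1, the file is named core.<pid>, not core
pwd; ls -ld .                                    # the core lands in the process's cwd, which must be writable
./myprogram                                      # start it from this same shell so the limit is inherited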
View 1 Replies
View Related
May 27, 2010
To get core dump from my program, I execute the following commands from the terminal:
ulimit -c unlimited
myprogram
After the program crashes, I see a core file in the home directory. How can I make this setting persistent, so that core dumps are always enabled?
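One way to make the limit stick, sketched here for a typical bash setup; the file locations are the standard ones, adjust for your shell:
# per-user: append to ~/.bashrc so every new shell gets the limit
echo 'ulimit -c unlimited' >> ~/.bashrc
# system-wide alternative: this line goes in /etc/security/limits.conf
*       soft    core    unlimited
# optional: a predictable core name, applied at boot via /etc/sysctl.conf
echo 'kernel.core_pattern=core.%e.%p' >> /etc/sysctl.conf
sysctl -p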
View 4 Replies
View Related
Jul 27, 2011
In my program, I fork() to get a child process. Because of some problem, the child process terminates with a segmentation fault while the parent process is still running. I have compiled my code with the -g option and have done: ulimit -c unlimited. I am not getting a core dump of the child process. How can I get one?
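A sketch of the usual checklist when only the child's core is missing; the commands are generic and "./parent_prog" is a placeholder:
ulimit -c                            # checked in the shell that starts the parent; the child inherits it
cat /proc/sys/kernel/core_pattern    # a relative name here means the core lands in the child's cwd
sysctl -w kernel.core_uses_pid=1     # name cores core.<pid> so parent and child dumps cannot collide
./parent_prog                        # run from this same shell, then look for core.<child_pid>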
View 1 Replies
View Related
Dec 29, 2010
I am using RHEL 4.7 (32-bit) on HP ProLiant 380 G6 series servers. We are using Electric Cloud agents on these servers. Lately we have been facing some memory issues that cause a kernel panic and then restart the server. When I reported the issue to my application team, they asked me to provide the core dump. I googled enough and then set the ulimit value to unlimited (previously it was 0) by making an entry in the /etc/profile file as follows:
ulimit -c unlimited
But still, whenever the server restarts due to that kernel panic, it does not generate a core dump. My application is installed under /opt.
The attached document has the kernel panic logs
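Note that ulimit -c only governs core files for crashing user-space processes; a kernel panic is captured by a kernel crash dump facility (kdump on RHEL 5, diskdump/netdump on RHEL 4), not by the shell limit. A quick sanity check that the per-process side is at least working, using a throwaway process:
ulimit -c                    # should print "unlimited" in the environment that starts the agent
sleep 600 &                  # disposable test process
kill -s SIGSEGV %1           # force a segfault; a core file should appear in the current directory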
View 3 Replies
View Related
Oct 21, 2010
I am developing an application whose executable is generated inside a certain folder hierarchy (say /DevPath/MyProject/bin). My source code is located in a different branch of this hierarchy (say /DevPath/MyProject/src). When my app crashes, its core files are stored in /DevPath/MyProject. I'm developing the app on a PC, but running it on a separate platform on which I can only execute it. The folder hierarchy is the same as above on both computers. Usually, when a new executable version is ready, we update both the executable and the source code on the target platform, transferring the whole new /DevPath/MyProject folder to it. But sometimes that can really be a bother, so we update only the executable.
1) In the case where we update only the executable, keeping an old source code version, and the app generates a core file, can I trust the backtrace produced by gdb? I.e., does gdb need the latest source code files, or does it just need the debugging information?
2) (More radical question) Do I really need to keep the source code on the target platform for core dump analysis, or do I just need the executable?
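In general, gdb builds the backtrace from the debug information embedded in the executable (compiled with -g); it only reads the source tree when asked to display lines, so a stale source copy can show the wrong lines but does not change the stack itself. A minimal session sketch, with "myapp" standing in for the real binary name:
gdb /DevPath/MyProject/bin/myapp /DevPath/MyProject/core
(gdb) bt full        # uses the debug info embedded in the executable
(gdb) frame 2
(gdb) list           # only this step reads the files from /DevPath/MyProject/src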
View 1 Replies
View Related
Nov 8, 2010
In one of our core dump we have the followings in the core back trace:
#0 0xb77bf947 in raise () from /lib/tls/libc.so.6
#1 0xb77c10c9 in abort () from /lib/tls/libc.so.6
#2 0xb77f56ba in __fsetlocking () from /lib/tls/libc.so.6
#3 0xb77fcf7f in mallopt () from /lib/tls/libc.so.6
#4 0xb77fd022 in free () from /lib/tls/libc.so.6
It occurred in a memory block free operation. From our analysis, there seems to be no issue related to the memory block itself: the pointer pointed to the right memory block to be freed, and the contents of the memory seem right (not corrupted); in a word, there is nothing obviously wrong. Does anyone have any idea what could be wrong when seeing a backtrace like the above?
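A backtrace that ends in abort() called from inside free() is typically glibc's own consistency check firing, which means the corruption usually happened earlier (a double free or an overrun of an adjacent block) even if the block being freed looks fine. Two generic ways to catch the earlier damage closer to its source; "./myprog" is a placeholder:
MALLOC_CHECK_=2 ./myprog           # glibc aborts at the first heap inconsistency it notices
valgrind --tool=memcheck ./myprog  # reports invalid writes and frees at the point they happen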
View 1 Replies
View Related
Jul 21, 2010
To analyse a core dump, I need to specify the program name/path in GDB/KDevelop. Since the program name, along with its arguments, is also stored within a core dump, I wonder whether the core doesn't keep the proper path of the program that crashed, and that is why GDB asks for it?
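The core does record the command line, which you can read without gdb; gdb still wants the executable because the symbols and debug information live there, not in the core. A quick way to see which program produced a core, assuming a file named simply "core":
file core               # prints something like: ELF 32-bit LSB core file ... from 'myprogram arg1 arg2'
gdb ./myprogram core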
View 3 Replies
View Related
Sep 20, 2010
I am trying to port some "C" code from Solaris to Linux. I have a Dell PowerEdge R610 with an Intel Xeon E5504 quad-core processor running Red Hat Enterprise Linux 5.3. I am compiling in 64-bit mode. I have managed to get the code compiled and linked, but when I attempt to execute it, I get a core dump in one of the C library calls (like strcpy or printf).
I have a static library that contains our own code that makes the call to the C library. If I move the library method into the source file with the main method and rename it to be certain that I am executing my method instead of the method in our library, the call succeeds. Eventually another static library call is made that results in a core dump in the shared object. I compile my library code into a static library with gcc as:
[Code]....
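The compile line itself is elided above, but one frequent cause of exactly this symptom when moving 32-bit-era C to 64-bit Linux is a missing prototype: the compiler silently assumes the function returns int, a returned pointer gets truncated to 32 bits, and the next strcpy/printf on it faults. A hedged sketch of a build that surfaces the problem; the file names are placeholders:
gcc -m64 -Wall -g -c mylib.c -o mylib.o    # watch for "implicit declaration of function" warnings
ar rcs libmylib.a mylib.o
gcc -m64 -Wall -g main.c -L. -lmylib -o myapp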
View 3 Replies
View Related
Nov 12, 2009
I have only just started with Linux. How can I convert a core dump file to a readable text file that includes all the information in the core dump, such as all variables, thread information, the call trace for each task, and so on? I know GDB can view this, but it won't dump all the information to one text file, and sometimes people want to see the reason for the core dump without a Linux environment.
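gdb can be driven non-interactively and its output redirected, which covers most of this. A sketch, assuming the executable and the core file are both at hand; "./myprogram" is a placeholder:
gdb -batch -ex "info threads" -ex "thread apply all bt full" -ex "info registers" ./myprogram core > core-report.txt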
View 2 Replies
View Related
Jun 4, 2009
The server is a Dell 1650 1U server with a fresh install of CentOS 5.3, formatting the drive in the process. It has two e1000 NICs, and based on the wiring, eth0 is the public segment and eth1 is the private segment. Both NICs make a solid 100 Mb connection to the Cisco switch, and I can ping both interfaces just fine.
If I run nmap from a remote PC that is also dual-homed in this fashion, I get the same response on both interfaces as to which ports are open (essentially everything, since I disabled iptables temporarily to troubleshoot this issue), which would be the expected result. But, and here is the source of my confusion, I can see the server by pinging it and I can SSH in and log in, yet I cannot get Apache to "answer the phone", so to speak. Running "tcpdump -i eth0" shows me that my browser on a remote PC is attempting to connect, but gets no reply. If I send my browser to the IP address of eth1, I get the default CentOS web server page as is the norm. Nothing shows up in my /var/log/httpd/access_log or error_log at all; Apache simply isn't answering. The httpd.conf is the original; the one line I attempted to adjust was the "Listen 80" line, to which I added the IP addresses explicitly, but that didn't change anything. I am unsure how nmap can indicate the port is open and there is a service behind it when, from the same PC that ran nmap, I cannot connect at all. Might anyone have a clue to toss me so I can work to figure this out?
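Two generic checks that usually narrow down this kind of one-interface-only behaviour: confirm what httpd is actually bound to, and confirm which interface carries the default route, since on a dual-homed box replies to eth0 traffic can leave via eth1's gateway and never make it back. A sketch:
netstat -tlnp | grep httpd                   # which IP:port combinations httpd is listening on
grep -i '^Listen' /etc/httpd/conf/httpd.conf
ip route show                                # which interface owns the default route
tcpdump -i eth0 port 80                      # watch for SYNs arriving with no SYN/ACK leaving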
View 1 Replies
View Related
Mar 8, 2010
I have a set of DVD discs. I'd like to rip them to .mkv files, but with all information, i.e. all subtitles and all audio tracks. Is there any tool on Linux that I could use to do this? I found a Gentoo howto about ripping, but it requires writing shell scripts, and I'd rather use something with a clickable interface.
View 1 Replies
View Related
Jan 27, 2010
We have a small cluster of 20 HP systems, all running CentOS 5.3 in an NFS-root environment. Half are quad-socket, quad-core Xeon E7340 @ 2.40GHz (total 16 cores), the other half are 8-socket, quad-core Opteron 8354 (total 32 cores). All systems have a Mellanox Infiniband adapter ("Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE] (rev a0)")
With kernel 2.6.18-128.1.6.el5, infiniband works fine on all systems.
With the update to kernel 2.6.18-164.11.1.el5 (and both types of node running the same NFS-root image), the 16-core Xeons still work fine. Infiniband no longer works on the 32-core Opterons. Specifically, either the ib0 interface fails to appear, or it does appear but when configured with an IP address, doesn't actually work. In either case, loading the IB kernel modules takes a long time, but I haven't instrumented the load script yet to see which module, if any, is at fault. More errors listed below.
However, if I tweak the BIOS of the 32-core systems to reduce the per-socket core count to 2 (so effectively 8-socket, dual-core, down to a total of 16 available cores), Infiniband starts working again. Putting it back to 32-cores makes it fail. Booting the older kernel makes it work again. In summary: old kernel, IB works on all systems. Newer kernel, IB only works on 16-core systems.
Updating the IB firmware from 2.5.0 to 2.7.0 (latest available) doesn't help. I also did a full 'yum update' to make sure that libmlx4, openibd, and all other associated packages were up to date. That doesn't help either.
Some errors that appear on 32-core nodes:
ib_query_port failed (-16) for mlx4_0
ib_query_port failed (-16) for mlx4_0
mlx4_core 0000:04:00.0: SW2HW_MPT failed (-16)
mlx4_core 0000:04:00.0: SW2HW_MPT failed (-16)
[Code]....
View 5 Replies
View Related
Dec 7, 2010
I'm posting because I've read everything I can find on Google, in the forum, and in the man page, and still can't get it to work. I did read the FAQ, and I hope I have adhered to it. I've tried several things, and I don't remember exactly everything I tried and in what order. I've got several (12) HP ProLiant DL140 G3 servers running CentOS 5 that lock up about once a week. These are in a remote colo cage, so all I have access to is the built-in HP Lights-Out management interface, which includes a console, and SSH. I've been trying to get kdump set up to try to figure out what's going on. As an aside, if I run top on the console (via the management interface), the servers stay up for about a week; if I don't run top, they crash within about 48 hours. I've used yum update to get the latest available kernel (2.6.18-194.26.1.el5.x86_64) and installed the debuginfo and debuginfo-common RPMs from http://debuginfo.centos.org/5/x86_64/. I have a single line in the /etc/kdump.conf file: ext3 /dev/sda5
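For reference, a minimal working kdump setup on CentOS 5 usually needs three pieces: memory reserved for the capture kernel on the boot line, a dump target in /etc/kdump.conf, and the service enabled. A sketch using the dump target from the post; the kernel line, root= value, and memory size are placeholders:
# /boot/grub/grub.conf -- append to the running kernel's line, then reboot
kernel /vmlinuz-2.6.18-194.26.1.el5 ro root=... crashkernel=128M@16M
# /etc/kdump.conf -- write the vmcore to the ext3 filesystem on /dev/sda5
ext3 /dev/sda5
path /var/crash
chkconfig kdump on
service kdump start      # after the reboot; "service kdump status" should then report it is operational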
View 4 Replies
View Related
Sep 6, 2010
I have used the dump command to back up the application files. For a full backup, level 0 works fine. For an incremental backup using level 1 or 2, I get the following error:
DUMP: Only level 0 dumps are allowed on a subdirectory
DUMP: The ENTIRE dump is aborted.
The code I used
===============================
#!/bin/bash
#Full Day Backup Script
#application folders backup
#test is the username
now=$(date +"%d-%m-%Y")
[Code]...
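That matches the restriction in dump(8): only level 0 is allowed when dumping a subdirectory, and incremental levels require dumping a whole filesystem. A sketch of the filesystem-level form; the device and backup paths are assumptions:
# full backup of the filesystem that holds the application data
dump -0u -f /backup/app-full-$(date +"%d-%m-%Y").dump /dev/sda5
# incremental backup of everything changed since the last lower-level dump
dump -1u -f /backup/app-incr-$(date +"%d-%m-%Y").dump /dev/sda5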
View 2 Replies
View Related
Oct 18, 2010
Is there any command available in order to read server crash dump files?
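If these are kdump/vmcore files, the usual tool is crash, which needs the debuginfo vmlinux that matches the kernel that panicked. A sketch; the vmcore path is only an example layout:
yum install crash kernel-debuginfo     # the debuginfo package comes from the matching debuginfo repository
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/2010-10-18-09:35/vmcore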
View 4 Replies
View Related
Aug 31, 2010
I need a complete software list from an existing RHEL 5 system. I need this list to compare the installed software against a government-approved software list to ensure the compliance of this system. I was given an RPM dump listing all the 2000+ packages on the system... This does not translate to the government-approved software list that I have to compare against. I do not have access to this system myself, so whatever method is prescribed for extracting the list I will have to pass along. What I need is either:
1) A way to convert an RPM Package dump to actual software names and versions, etc.
OR
2) A method to extract a complete list of software (titles/versions/etc) from an instance of RHEL5.
Example: instead of knowing that "pango-devel-1.14.9-6.el5" exists on the system, I need to know that "Pango v3.0.x" is installed on the system. Many packages do not relate on a one-to-one basis to a specific piece of software, due to inter-dependencies etc. (not to mention that I need the version of the software, not the version of the package/library). Pango is not the best example, as you can see what software is likely the source of that package; however, just because the package is installed, I cannot guarantee 100% that the Pango software suite is installed, just that this package was installed...
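rpm can print friendlier fields than the bare package file name, which gets part of the way there; mapping packages to software products still needs human judgement, but a query like this (the output file name is arbitrary) gives the name, version, and the packager's one-line summary:
rpm -qa --qf '%{NAME}\t%{VERSION}-%{RELEASE}\t%{SUMMARY}\n' | sort > installed-software.txt
rpm -qi pango-devel        # full description, vendor, and URL for a single package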
View 6 Replies
View Related
Mar 20, 2010
I can't find any threads in this forum on how to make a RAID 5 array on Fedora Core 10. How do I make one?
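Software RAID 5 on Fedora is normally built with mdadm; a minimal sketch with three spare partitions (the device names are assumptions):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf       # so the array is assembled at boot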
View 1 Replies
View Related
Apr 26, 2011
1. I installed oprofile 0.9.4 on CentOS Linux 5.5 this past Saturday and have been trying to learn how to use it since. If I follow the following steps running as root,
[Code]....
I obtain the following warnings:
warning: the last modified time of the binary file does not match that of the sample file for /home/frankc/DQTTest5/MatchUpLib/lirh5g_deb/libmdMatchup.so Either this is the wrong binary or the binary has been modified since the sample file was created.
warning: the last modified time of the binary file does not match that of the sample file for /home/frankc/DQTTest5/MatchUpTest/lirh5g_deb/MatchUpAccurate.exe
I am wondering how to prevent this warning message from occurring, because it indicates our profiling results may be incorrect.
2. My second question is: when I use opcontrol --start --session-dir=/home/frankc/DQTTest/Mary5Test and profile my application using /home/frankc/DQTTest5/MatchUpTest/lirh5g_deb/Mary48.exe --run, I encounter an error when I do opcontrol --dump. The error is: "Unable to complete dump of oprofile data: is the oprofile daemon running?" Why does the oprofile daemon stop when I use opcontrol --start --session-dir=/home/frankc/DQTTest/Mary5Test but not when I use plain opcontrol --start? I checked the permissions on /home/frankc/DQTTest/Mary5Test and they are drwxrwxrwx.
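On the first point, that warning generally appears when the binary was rebuilt after the samples were collected, so the safest habit is to reset and re-profile against the binary you will analyse. A sketch of the plain default-session sequence, reusing the paths from the post:
opcontrol --reset
opcontrol --no-vmlinux        # or --vmlinux=/path/to/vmlinux if kernel samples are wanted
opcontrol --start
/home/frankc/DQTTest5/MatchUpTest/lirh5g_deb/Mary48.exe --run
opcontrol --dump
opreport -l /home/frankc/DQTTest5/MatchUpTest/lirh5g_deb/MatchUpAccurate.exe
opcontrol --shutdown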
View 3 Replies
View Related
Jul 8, 2009
I have a multi-threaded application using pthreads. On an application crash, or when signalling with 'kill -s 6', the core file created by the 2.6.18-128 kernel on CentOS 5.3 shows only one single thread. A core file saved with gcore in gdb shows all running threads properly, so the problem is clearly in the kernel. I tested CentOS 5.2 (kernel 2.6.18-92) and it works correctly.
what's wrong with CentOS 5.3 kernels?
View 1 Replies
View Related
Jan 19, 2010
I need to upgrade one of our systems from its current distribution, Fedora Core 7, to the most recent release, 5.4, of the CentOS operating system. Can I do an in-place upgrade of the operating system without any adverse side effects? Are there any issues that I should be concerned about before proceeding?
View 10 Replies
View Related
Mar 17, 2009
Just installed CentOS 5.2 x86_64 on a Core 2 Duo (E6750). From /proc/cpuinfo I see that only one processor is detected. Is there any parameter in the BIOS that needs to be changed?
View 2 Replies
View Related
Jan 22, 2010
Sadly, I googled the heck out of this and even the VMware community has no answers. So I decided to turn to the experts and see if you can provide a better solution than [URL]. The system sees both CPUs and all of the cores (cat /proc/cpuinfo). I have tried with HT and without HT (I never run HT anymore; it seems to hurt performance in my workloads). I've tried the most recent kernels for CentOS 5.4 and still have no luck. There seem to be lots of people having this issue, but no real solutions.
Dell PE R710 with 2x Xeon CPU X5550
kernel 2.6.18-164.11.1.el5 #1 SMP x86_64
Fresh CentOS 5.4 install
View 2 Replies
View Related
Apr 28, 2010
I'm trying to set up CentOS 5.4 x86_64 on a new machine but am running into problems. The machine is: CPU: Intel Core i7-860, RAM: 4 GiB DDR3, Motherboard: Intel DQ57TM. When booting from the disc, I get to the initial splash screen, but almost immediately after that I get a kernel panic.
None of the lines leading up to it mean much to me, so I'm not sure what to copy here, but the last line says: <0>Kernel panic - not syncing: Fatal exception
View 14 Replies
View Related
Jun 21, 2010
Can anyone recommend a pre-built Core i7 desktop on which CentOS 5 can be easily installed? It will not be used for gaming. We are considering the Aspire M5811 and the Vostro 430; however, I heard that there is an RHEL bug with the NIC on the latter.
View 1 Replies
View Related
Apr 1, 2011
I am running a Gigabyte GA-M68M-S2P and an AMD Sempron 2.7. The problem is when I try to run dual core: it will boot and run for 2 minutes, then it crashes. Single core runs perfectly.
View 6 Replies
View Related
Nov 3, 2009
Is it possible to backport the CentOS 5 USB HID core driver (version 2.6) to the CentOS 4 USB HID core driver (version 2.0)?
View 2 Replies
View Related
Nov 3, 2010
I just loaded F14 on an old Dell Dimension 3000 with a dual-core processor, but only one core is showing. Here's the output from /proc/cpuinfo:
processor: 0
vendor_id: GenuineIntel
cpu family: 15
model: 4
[Code]....
View 2 Replies
View Related
Mar 27, 2011
I have a command-line OCR program called OCR Shop XTR (Vividata Corp) that I am using on a system with a 6-core AMD chip. I changed the BIOS so that all 6 cores are activated, but htop shows me that while the program is running, I am only getting activity on one core (the program maxes out that one core with consistent usage between 97% and 100%).
I have read that many programs are not written to take advantage of multi-core CPUs. However, I am hoping that there is some way to get this program to take advantage of the extra cores. Does anyone know of a way to invoke programs from the command line that would spread the workload out among additional cores? (A sketch of one workaround follows the cpuinfo excerpt below.)
Here is the output of uname -a:
Linux linux 2.6.37.1-1.2-desktop #1 SMP PREEMPT 2011-02-21 10:34:10 +0100 i686 athlon i386 GNU/Linux
And here is the output for one of the cores from cat /proc/cpuinfo:
processor : 5
vendor_id : AuthenticAMD
cpu family : 16
model : 10
model name : AMD Phenom(tm) II X6 1100T Processor
stepping : 0
[Code].....
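If OCR Shop XTR itself is single-threaded, the usual workaround is to run several independent copies at once, one per input file, so the extra cores still get used. A hedged sketch with xargs; "ocrxtr" and the file layout are placeholders for the real command line:
ls scans/*.tif | xargs -P 6 -I{} sh -c 'ocrxtr "{}" > "{}.txt"'   # keep up to 6 OCR jobs running at once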
View 5 Replies
View Related
Feb 14, 2010
Just a quick question: does Kubuntu Karmic support a Core 2 E8500 out of the box, or do you need the SMP kernel?
View 3 Replies
View Related