A process from some software I am running keeps crashing with no discernible pattern. I have tried running the process under ddd/gdb, but every time it crashes no useful information is returned. I also tried getting a core file, with the same result. As far as Linux is concerned, the program appears to have exited normally.
This obviously points towards the process itself having a bug, but other instances of the same program are running on other machines on the network with no problems at all.
I have compared the hardware/drivers (lspci etc.) installed on various machines, and all are exactly the same as on the machine in question. So my question is (at long last): what else should I be looking for?
Assume someone binds a particular process to a particular CPU core (on a multi-core machine) using a function like sched_setaffinity(). How can we then get the ID of the core that process is running on, and the CPU utilisation of that process on that core (programmatically or with a Linux command)?
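A minimal sketch of the programmatic side, assuming Linux and glibc (not the poster's setup): sched_getcpu() reports, from inside the process, which core the calling thread last ran on. From outside, ps -o psr= -p <pid> prints the same information, and pidstat -p <pid> 1 (from the sysstat package) shows that process's CPU usage over time.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Which core is this (pinned) process running on right now? */
        int cpu = sched_getcpu();
        if (cpu == -1) {
            perror("sched_getcpu");
            return 1;
        }
        printf("pid %d is currently on CPU %d\n", (int)getpid(), cpu);
        return 0;
    }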
I am attempting to install a custom Fedora build, but it crashed. I then attempted the standard Fedora build (.iso) burned to a CD; I got to the boot window, but at some point after the colored lines going across the bottom of the screen it froze. Symptoms: the monitor is still on (not hibernating), Num Lock works but Caps Lock doesn't, and the hard drive and CD drive are both still spinning. Has anyone had issues loading this build onto similar hardware?
There is a process called STD that uses 90-plus percent of the CPU. If it's running when I plug into the network, the network crashes. I also can't watch movies or do anything processor-intensive while it's running.
I have an application where multiple processes talk to each other. One of the processes is crashing repeatedly with a SIGABRT signal. I have attached gdb to that process and tried to figure out what the stack looks like at the point of the crash.
To see if an application is using all the cores in a system, I wanted to identify which process is running on each core at a given time. Is there any way to get that information?
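One possible approach, offered as a sketch rather than a definitive tool: from the shell, ps -eo pid,psr,comm lists every process together with the processor (PSR column) it last ran on. Programmatically, field 39 of /proc/<pid>/stat carries the same value (see proc(5)); the snippet below assumes Linux's /proc layout and reads it for one PID.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        char path[64];
        snprintf(path, sizeof path, "/proc/%s/stat", argv[1]);

        FILE *f = fopen(path, "r");
        if (!f) { perror("fopen"); return 1; }

        char buf[4096];
        if (!fgets(buf, sizeof buf, f)) { fclose(f); return 1; }
        fclose(f);

        /* The comm field (2) may contain spaces, so parse from the last ')' on. */
        char *p = strrchr(buf, ')');
        if (!p) return 1;

        int field = 2;                      /* we are at the end of field 2 */
        int cpu = -1;
        for (char *tok = strtok(p + 1, " \n"); tok; tok = strtok(NULL, " \n")) {
            field++;
            if (field == 39) {              /* "processor": CPU last run on */
                cpu = atoi(tok);
                break;
            }
        }
        if (cpu >= 0)
            printf("pid %s last ran on CPU %d\n", argv[1], cpu);
        return 0;
    }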
How can we run a Linux process like tar on multiple cores? For example, when building the kernel we can use -j4 to distribute the work across 4 different cores. Is it possible to run a long, time-consuming process on multiple cores?
I'm looking for a solution. Is it possible to run a process on just one core in UNIX? From time to time I must start a very CPU-consuming process, and unfortunately it runs on all cores... so working on the PC while it runs is very hard.
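For what it's worth, a rough sketch assuming a Linux system: taskset -c 0 <command> (from util-linux) launches a command restricted to core 0, and sched_setaffinity() does the same thing programmatically for an already-running PID, as below. Note that this call affects the named task only; a heavily multi-threaded target would need each of its threads restricted as well.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                   /* allow CPU 0 only */

        /* Run as the process owner or root. */
        if (sched_setaffinity(atoi(argv[1]), sizeof set, &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %s is now restricted to CPU 0\n", argv[1]);
        return 0;
    }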
I want to know whether there is any way to prevent a multi-threaded process from crashing if an error (say, a segmentation fault) occurs in one of its child threads. I've found the pthread_sigmask() function, but it does not seem to work:
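The code that followed is not shown here. As a general caution, SIGSEGV is delivered synchronously to the faulting thread, so pthread_sigmask() cannot make a segmentation fault in a child thread harmless. A different, commonly used pattern, sketched below under the assumption that the risky work can be isolated, is to run it in a separate process and inspect how that process exited; the hypothetical risky_work() deliberately segfaults just to demonstrate.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void risky_work(void)
    {
        int *p = NULL;
        *p = 42;                            /* deliberate segfault for the demo */
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                     /* child: do the dangerous part */
            risky_work();
            _exit(0);
        }

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("worker died on signal %d, parent keeps running\n",
                   WTERMSIG(status));
        return 0;
    }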
My system has a 4-core Intel Xeon CPU (8 logical cores with hyper-threading) and runs 64-bit Linux. I want to allocate one core for general work (kernel threads and other processes), and then allocate the remaining cores to a specific multi-threaded program.
Q1: I know that I can use pthread_setaffinity() for user-mode threads and mpstat for monitoring. So how can I allocate a core for kernel processes and monitor it?
Q2: How can I reserve those cores for the multi-threaded program? I don't want kernel processes to use the cores assigned to it.
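A hedged sketch of the user-space half of this, assuming glibc's pthread_setaffinity_np() and an 8-logical-core box (compile with -pthread): pin the program's workers to cores 1-7 and leave core 0 alone. Keeping kernel threads and other processes off cores 1-7 is a separate step that thread affinity alone cannot enforce; boot-time isolation such as the isolcpus= kernel parameter (or cpusets) is the usual route, and mpstat -P ALL then shows the per-core load.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NWORKERS 7                      /* cores 1..7; core 0 stays free */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(1 + id, &set);
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);

        /* ... real work would go here ... */
        printf("worker %ld pinned to CPU %ld\n", id, 1 + id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }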
In my program, I fork() to create a child process. Because of some problem, the child process terminates with a segmentation fault while the parent process keeps running. I have compiled my code with the -g option and have run: ulimit -c unlimited. I am still not getting a core dump of the child process. How can I get one?
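A small sketch of the scenario, assuming Linux and glibc: the child inherits RLIMIT_CORE from the parent, but raising it explicitly right after fork() removes one variable, and WCOREDUMP() in the parent confirms whether the kernel actually wrote a core. Where that core lands is governed by the child's working directory and /proc/sys/kernel/core_pattern.

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
            setrlimit(RLIMIT_CORE, &rl);    /* make sure the child may dump */

            int *p = NULL;
            *p = 1;                         /* child segfaults and should dump core */
            _exit(0);
        }

        int status;
        waitpid(pid, &status, 0);
        /* WCOREDUMP() reports whether the kernel actually wrote a core file */
        if (WIFSIGNALED(status))
            printf("child killed by signal %d, core dumped: %s\n",
                   WTERMSIG(status), WCOREDUMP(status) ? "yes" : "no");
        return 0;
    }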
I want to generate core dump files from my program when it crashes. It's a pretty big process with about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory I configured. However, when I ran my original program and caused it to crash (by making calls to kill(), raise(), or the same null-pointer access shown on the webpage), my program crashed in each case but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage, I can see two potential problems. One: there are issues with suid/sgid (bullet #6). I am not able to change any suid settings because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two: my program has threads in it, and bullet #8 may be the problem.
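Two hedged observations. First, only some signals produce a core by default: SIGSEGV, SIGABRT and SIGQUIT do, while kill()/raise() with SIGKILL or SIGTERM never will, so the choice of signal in the crash test matters. Second, a quick way to confirm that the ulimit actually reached the process is to print RLIMIT_CORE from inside it, as in the sketch below; if the program is launched from an init script or another daemon rather than the shell where "ulimit -c unlimited" was typed, the limit can still be 0 there, and setrlimit() can then raise it in-process.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_CORE, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("RLIMIT_CORE: unlimited\n");
            else
                printf("RLIMIT_CORE: %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);
        }
        return 0;
    }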
I have some files with the .sh extension that run some software. Now, how do I stop those running programs? I know we run the command ./start_tomcat.sh to start Apache Tomcat. Is there a command to stop that file/process, or do I just kill the process to stop it?
I have recently done a fresh install of Ubuntu 10.10 on my Acer Aspire 5940G laptop, and everything seems to be going great apart from the fact that I keep getting random system lockups when I am transferring files from my iPod or USB disk, or copying large files on my hard disk.
I want to start by stating that I don't have any issue I'm trying to resolve, only a hypothesis. I could RTFC, but there's a lot of filesystem code I'd have to familiarize myself with ;) I'm curious what would happen if I were to open a file and then unlink it in the filesystem (while still holding the open handle), and then the system crashes.
Specifically, I'm curious whether the file's inode will still indicate that it has references even though nothing in the filesystem points to it anymore, or whether it's up to the OS to know it can't write to that space while, as far as the inode is concerned, it's free.
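A small illustration of the pattern in question, assuming a POSIX system (the filename scratch.tmp is just for the demo): the comment marks where a crash would leave an orphaned inode, i.e. link count 0 on disk with no directory entry pointing at it. Journaling filesystems such as ext3/ext4 keep an orphan list so that journal replay or fsck can free the space on the next mount.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        unlink("scratch.tmp");              /* name gone, inode still referenced */

        /* <-- a crash here leaves an orphaned inode on disk */

        if (write(fd, "still usable\n", 13) != 13)
            perror("write");                /* reads/writes keep working */
        close(fd);                          /* last reference: space is freed */
        return 0;
    }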
I've got a system that has given me problems since day one. It's my oldest kid's computer, and she seems to open about twenty tabs in Firefox. The computer will freeze and she'll manually hold down the switch to reset it. I've instructed her to please stop shutting it down manually, but kids never listen. So anyway, the thing reboots into initramfs and seems unable to do anything with the hard disk. Now here's where I run into problems. In the past I've removed the drive, put it into one of my other Ubuntu boxes, and run fsck; fsck always recovers the journal quickly, and I pop the drive back in and all is well.

First question, or situation if you will: I have tried left and right to get fsck to work from the live CD. If I let the live CD boot up and open a terminal, fsck /dev/sda1 comes back with "device or resource busy". Apparently the live CD gets stuck automounting the drive and causes problems.
I'm really tired of putting this thing in another box. I tried downloading Knoppix, but it wouldn't burn for some reason. I've tried booting into rescue mode, but that seems to be missing from the live CD these days. Can I boot into single-user mode somehow? Kill off whatever process is keeping the device busy? I think I once flagged the drive as dirty and had it clean itself on reboot... will the live CD pick up on that?

OK, so that's the first situation. The second is that recently fsck doesn't fix the problem: the drive recovers just fine, but after using the computer for a short while the drive somehow magically remounts as read-only, and then programs freeze and shutting down is hard to do.
I am running a Gigabyte GA-M68M-S2P motherboard and an AMD Sempron 2.7. The problem is when I try to run dual-core: it will boot and run for 2 minutes, then it crashes. Single-core runs perfectly.
I am getting a segmentation fault with a core dump while running a new C program, but the core file size is set to zero (per "ulimit -a"), so there is no core file to use with gdb. I tried "ulimit -c unlimited" both as myself and as root, but it doesn't change. Still zero.
I have a multi-threaded application using pthreads. On an application crash, or when signalling with 'kill -s 6', the core file created by the 2.6.18-128 kernel on CentOS 5.3 shows only a single thread. A core file saved with gcore in gdb shows all running threads properly, so the problem appears to be in the kernel. I tested CentOS 5.2 (kernel 2.6.18-92) and it works correctly.
I have a command-line OCR program called OCR Shop XTR (from Vividata) that I am using on a system with a 6-core AMD chip. I changed the BIOS so that all 6 cores were activated, but htop shows that while the program is running I only get activity on one core (the program maxes out that core, with usage consistently between 97% and 100%).
I have read that many programs are not written to take advantage of multi-core CPUs. However, I am hoping there is some way to get this program to use the extra cores. Does anyone know of a way to invoke programs from the command line that would spread the workload across additional cores?
Here is the output of uname -a:

    Linux linux 2.6.37.1-1.2-desktop #1 SMP PREEMPT 2011-02-21 10:34:10 +0100 i686 athlon i386 GNU/Linux

And here is the output for one of the cores from cat /proc/cpuinfo:

    processor       : 5
    vendor_id       : AuthenticAMD
    cpu family      : 16
    model           : 10
    model name      : AMD Phenom(tm) II X6 1100T Processor
    stepping        : 0
I have now installed Wheezy on two different hard drives, and in each case it seems only one CPU of my dual-core computer is recognized. System Monitor, GKrellM and lscpu show just one, whereas prior to the new install the old Wheezy showed both CPUs. I have put the hard drive into two other computers with dual-core CPUs, and all show just one CPU.
Interestingly, System Profiler and Benchmark (hardinfo?) > Devices > Processors now shows a large amount of processor information, whereas with the old Wheezy I would only see both CPUs listed and nothing else.
I recently read in a forum that by default the Linux kernel only activates one of the two cores in a dual-core processor. Searching online gave one way to check: the mpstat command. I ran the command and got the following output. As the result shows, it reports only 1 CPU. I was wondering what I could do to activate both cores on my machine, and whether doing so would cause any problems.
I was installing some stuff, and now, after the install, the look of my KDE has totally changed: Konqueror won't load, and VLC also changed although it still works. The file manager (Dolphin) keeps crashing if I try to open the video folder. How do I revert to the old KDE, or at least make things right?