Software :: Install A Program On System (core 5) But Fails Due To Fort77
Apr 9, 2010
I am trying to install a program on my Linux system (Fedora Core 5) but it fails because there is no fort77 compiler. I know that I have a working ifort on my system, but I need fort77. It looks like the program I am trying to install can also be compiled with g77, but that one is missing as well. How can I get these compilers and make them work on my system?
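For what it's worth, g77 was dropped from the main gcc packages around that era but usually survives in a compatibility package, and fort77 itself is just a wrapper script around f2c, so pointing the build at g77 is often enough. The package name below is an assumption — verify it with yum search first:

```shell
# Package name is a guess for Fedora Core 5 -- run "yum search g77" to confirm
yum install compat-gcc-32-g77      # legacy g77 front end

# Many autoconf-based builds accept a compiler override instead of fort77:
./configure F77=g77
```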
We have a small cluster of 20 HP systems, all running CentOS 5.3 in an NFS-root environment. Half are quad-socket, quad-core Xeon E7340 @ 2.40GHz (total 16 cores), the other half are 8-socket, quad-core Opteron 8354 (total 32 cores). All systems have a Mellanox Infiniband adapter ("Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE] (rev a0)").
With kernel 2.6.18-128.1.6.el5, infiniband works fine on all systems.
With the update to kernel 2.6.18-164.11.1.el5 (and both types of node running the same NFS-root image), the 16-core Xeons still work fine. Infiniband no longer works on the 32-core Opterons. Specifically, either the ib0 interface fails to appear, or it does appear but when configured with an IP address, doesn't actually work. In either case, loading the IB kernel modules takes a long time, but I haven't instrumented the load script yet to see which module, if any, is at fault. More errors listed below.
However, if I tweak the BIOS of the 32-core systems to reduce the per-socket core count to 2 (so effectively 8-socket, dual-core, down to a total of 16 available cores), Infiniband starts working again. Putting it back to 32-cores makes it fail. Booting the older kernel makes it work again. In summary: old kernel, IB works on all systems. Newer kernel, IB only works on 16-core systems.
Updating the IB firmware from 2.5.0 to 2.7.0 (latest available) doesn't help. I also did a full 'yum update' to make sure that libmlx4, openibd and all other associated packages were up-to-date. That doesn't help either.
Some errors that appear on 32-core nodes:
ib_query_port failed (-16) for mlx4_0
ib_query_port failed (-16) for mlx4_0
mlx4_core 0000:04:00.0: SW2HW_MPT failed (-16)
mlx4_core 0000:04:00.0: SW2HW_MPT failed (-16)
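To narrow down which module is slow or failing without rewriting the load script, one option is to load the stack one module at a time and time each step. The module list below is a guess at a typical ConnectX/mlx4 stack — adjust it to whatever /etc/init.d/openibd actually loads on these nodes:

```shell
# Load the IB stack piecewise and time each load (needs root;
# module names are assumptions for an mlx4/ConnectX setup)
for m in mlx4_core mlx4_ib ib_umad ib_ipoib; do
    echo "== $m =="
    time modprobe "$m"
done
dmesg | tail -n 30    # look for SW2HW_MPT / ib_query_port errors right after
```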
I have a desktop system (P55-USB3 + Core i7 + Ubuntu 10.10) that fails to suspend to / resume from memory. So I'm trying to diagnose the problem. The first obstacle was easy enough: when I put the system to sleep, the computer comes back alive right away. A look at /var/log/kern.log revealed that one USB device (usb10) failed to suspend, and from there I was able to pin it down to the USB3 controller in the BIOS. I disabled that, and this problem disappeared.
Now I'm stuck with the second obstacle. The computer successfully goes into suspend mode, but it hangs during resume. The monitor doesn't get any video signal, and the machine fails to respond to ping (netconsole doesn't work either). After a forced reboot (which involves unplugging the power cable), /var/log/kern.log doesn't contain any interesting entries. All the pm_test modes from freezer to core succeed (I followed [URL]). I've also tried pm_trace (https://wiki.ubuntu.com/DebuggingKernelSuspend), but again neither kern.log nor dmesg contains anything after the suspend. Either the write didn't survive the forced power-off, or the resume is failing even before that. The motherboard doesn't have a serial port or FireWire, so getting kernel logs through them is not a possibility, either.
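One detail that may explain the empty logs: pm_trace doesn't write to disk at all. It stores a hash of the last device touched in the battery-backed RTC, so it survives a power pull, but it is only decoded and printed by the kernel on the *next* boot. A rough sketch of the documented procedure:

```shell
# Arm pm_trace, suspend, and after the hang power-cycle the box.
# Warning: this scrambles the hardware clock afterwards.
echo 1 > /sys/power/pm_trace
echo mem > /sys/power/state

# On the very next boot, the kernel decodes the RTC breadcrumb:
dmesg | grep -i -A2 'magic number'   # a "hash matches <device>" line names the culprit
```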
I am trying to update Flash-player and yast2-core as prompted by today's update icon.
I am now faced with a loop that goes like this:
1) click on icon
2) window pops up, "2 packages selected"
3) click on apply, subwindow pops up
4) click on continue, "waiting for authentication"
5) after minutes of waiting, click on cancel, nothing happens
6) click on icon, killing subwindow
7) now back to 2)
Aborting the process, starting at 1) again, same mess
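When the applet loop can't be broken, a common fallback is to apply the same updates from a root terminal with zypper instead (package names are taken from the popup; adjust if yours differ):

```shell
# Bypass the stuck updater applet and update directly
zypper refresh
zypper update flash-player yast2-core

# If zypper complains the package manager is locked, find the stale holder:
ps aux | grep -i packagekit
```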
I've a program that launches new processes and waits for them to die before it exits. So, for example, my program is a process, and it launches 3 more processes; when the 3 child processes end, it will exit.
As you see, at the end of the example, the program used a total of 4 processes.
1 - Now, I'm running this program on a CPU with 4 cores. Does this mean the program used one core for each process?
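The launch-and-wait pattern itself looks like this in shell. Whether each child lands on its own core is up to the scheduler: it will spread runnable processes across idle cores when it can, but there is no guaranteed 1:1 mapping of process to core.

```shell
# Parent launches 3 children, then blocks until all of them have exited
for i in 1 2 3; do
    ( sleep 0.2; echo "child $i done" ) &
done
wait                    # returns only after every background child dies
echo "parent exiting"
```

The `wait` builtin is the shell equivalent of the waitpid() loop a C program would use.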
I want to generate core dump files from my program when it crashes. It's a pretty big process with about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, I tried running my original program and caused it to crash. I did this by making calls to kill(), raise(), or the same null pointer access as shown in the webpage above. In each case, my program crashed but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage as above, I can see two potential problems. One, there are issues with suid/sgid (bullet #6). I am not able to change any settings for suid because my system contains neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two, my program has threads in it, and bullet #8 is the problem.
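Before chasing the threads/suid angles, one gotcha worth double-checking: the core-size limit is per-process and inherited, so it has to be raised in the same shell that then launches the program — setting it in a different terminal has no effect:

```shell
# Set and verify the limit in the shell that will start the crashing program
ulimit -c unlimited
ulimit -c            # should print "unlimited"
```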
I am seeing cores dumped in a particular directory continuously. I am sure there must be a script that's continuously starting a process which is core-dumping and dying. But how can I find which process it is?
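The dumps themselves can be made to name the culprit: the kernel's core_pattern supports %e (executable name) and %p (PID) placeholders, so each new core file identifies the binary that produced it.

```shell
# Tag every future dump with the program name and PID (needs root)
echo 'core.%e.%p' > /proc/sys/kernel/core_pattern
# The next dump will be named e.g. core.<progname>.<pid>.
# For existing dumps, "file core" also reports which command produced them.
```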
Obviously I don't know that much about CPU architecture. I have a fancy new quad-core CPU in the machine I just built. Now, given the current state of technology, two or more often three of those cores are pretty much sitting idle. My question is: is there any way to utilize those extra cores with programs not optimized to take advantage of them? If I have some CPU-intensive application, is there any way to tell it to utilize a core that's idling instead of the first core, which probably already has a bunch of processes running on it?
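A single-threaded program can't be split across cores, but the scheduler already migrates it to an idle core automatically, so manual placement is rarely needed. If you do want to force the matter, taskset (from util-linux) pins a process to a chosen core:

```shell
# Run a command pinned to a single core (core 0 here):
taskset -c 0 echo "pinned run"

# Re-pin an already-running process by PID (PID here is hypothetical):
#   taskset -cp 3 12345
```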
I have tried installing VMware Server 1.0.10 on Fedora Core 12. After installing all packages, it fails on building the vmmon module. On the internet I found many patches and vmware-any-any-updates, but nothing worked. My kernel version is 2.6.31.12-174.2.3.fc12.i686.PAE...
To analyse a core dump, I need to specify the program name/path in GDB/KDevelop. Since the program name along with its arguments is also within a core dump, I wonder if it doesn't keep the proper path of the program that crashed, and that's why it asks for it?
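That's essentially right: the core's note section records argv[0] (the command name, truncated), not a verified full path to the executable, so the debugger asks you to supply the matching binary yourself. The paths below are placeholders:

```shell
# See what the core remembers about its producer, then load it properly
file core.1234                    # prints "... from 'myprog'" -- name only, no path
gdb /path/to/myprog core.1234     # you supply the binary that matches the core
```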
When I try to install Debian 8 on my laptop I get this rather odd error. The install fails every time. I've managed to get as far as choosing which desktop environment I want, and then shortly after it shuts off. It shows 4 messages
The machine I'm trying to install on is a Gateway NV53 with 4 GB of RAM and an AMD Athlon II x64.
At first I thought it might be my disc so I burnt another DVD using the 4.3GB DVD image I had downloaded. I checked the disc and it verified with the image and so I tried again with the same results as above. Any clue what might be causing this? I'm sure it isn't my hardware, Arch has been running fine for almost 6 months and never seemed to care.
I am trying to update my OS from Opensuse 11.2 to 11.3 using the installation DVD (checked OK). Everything is OK during the installation process until the message "System is going to reboot" is displayed with a countdown: when the countdown is finished, the screen becomes black and the screensaver starts, but the system does not reboot. When I reboot manually, after selecting "Opensuse 11.3 desktop" on the first menu, the screen becomes black and the screensaver starts.
So the installation seems incomplete and I do not know how to finalize it. When I select the failsafe option on the first menu, it seems to work but some behaviours are quite "strange"... When I choose update instead of installation at the beginning, the behaviour is the same.
I do not know if it is linked but the firmware tests started from the installation DVD show the following errors :
I'm trying to build a web server but I keep having a problem. Everything loads fine until I get to the "Install base system" part of the installation, and it puts up a red screen warning me that the install has failed. My uncle gave me his Ubuntu discs and even some different hard drives, but I keep getting the same problem at the same point of the installation. Here is the rundown of everything I did.
System #1 - ASUS P5G41-M motherboard, 4GB memory, Intel Core 2 Quad Q9300 CPU, two Western Digital 640GB hard drives
I was testing it out to run a RAID 1, just to learn how to make a software RAID array; it's my first try at making one. The first time I used Ubuntu Server 10.04 32-bit, and the system failed at the "install base system" point. I then used Boot and Nuke to clean the drives and tried 10.04 64-bit. It failed at the same exact point. My uncle gave me 2 Western Digital 1.0TB hard drives and I tried again, with the same results. At that point I gave all the hardware back to my uncle, and he gave me a server board to try out because he bought it and hasn't touched it. So I built a second system.
I have a problem that I can't login to SUSE in graphical mode. I get to the login prompt; enter my username and password; SUSE starts to do the login but then crashes back to the login prompt. Looking in /var/log/messages doesn't tell me anything useful. However, I noticed that my SUSE system partition is full (at 20 GB). So I think this is the culprit that is stopping my login.
Unfortunately, I can't extend my system partition: it is ext4 (the SUSE default), and parted (from the SUSE 11.2 live CD) complains that it can't do anything with ext4.
I'm using OpenSUSE11.2 x86_64 KDE 4.3.5 Linux 2.6.31.14-0.1-desktop
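Before fighting parted, it may be enough to simply free some space: a full root partition can easily break graphical login, since X and KDE need to write session and temp files. From a text console (Ctrl+Alt+F1), the paths below are just the usual suspects:

```shell
# Confirm the partition really is full, then find the big consumers
df -h /
du -xm /var /tmp 2>/dev/null | sort -rn | head -n 10   # sizes in MB, largest first
```

Old logs under /var/log and leftover files in /tmp are common culprits.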
I want to try Debian on my Asus Eee netbook and I'm trying to follow the instructions in URL... But just copying the ISO file to the USB drive then trying to boot from it doesn't seem to work. I just get "Missing operating system".
The Eee can use an external optical drive as well but that failed also. I'm sure I need to do more to prepare the USB drive or CD? Can I prepare the USB Drive or CD on my Windows system, and make it boot on the netbook (which has another Linux distro on it now)?
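Just copying the ISO as a file onto the stick won't make it bootable. If the image is an isohybrid one (recent Debian images are), it can be written raw to the device from any Linux box; older images instead need the hd-media/boot.img.gz method described in the Debian install guide. The filename and device below are placeholders:

```shell
# DESTRUCTIVE: /dev/sdX is the whole stick, not a partition --
# double-check the device name with lsblk before running this
dd if=debian-netinst.iso of=/dev/sdX bs=4M
sync
```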
I am trying to upgrade from Fedora 12 to Fedora 13 using "Preupgrade" but it fails in the last part and tells me that the system cannot find the previous system (fedora 12). I have tried to do the upgrade several times and always the error is the same.
I will be relocating to a permanent residence sometime in the next year or two. I've recently begun thinking about the best way to implement a home-based network. It occurred to me that the most elegant solution might be the use of VM technology to eliminate as much hardware and wiring as possible. My thinking is this: install a multi-core system and configure it to run several VMs, one each for a firewall, a caching proxy server, a mail server, and a web server. Additionally, I would like to run 2-4 VMs as remote (RDP) workstations, using diskless workstations to boot the VMs over powerline Ethernet. The latest powerline technology (available later this year) will allow multiple devices on a residential circuit operating at near-gigabit speed, just like legacy wired networks.
In theory, the above would allow me to consolidate everything but the diskless workstations on a single server and eliminate all wired (and wireless) connections except the broadband connection to the Internet and the cabling to the nearest power outlets. It appears technically possible, but I'm not sure about the various virtual connections among VMs. In theory, each VM should be able to communicate with the others as if they were on the same network via the server data bus, but what about setting up firewall zones? Any internal I/O bandwidth bottlenecks? Any other potential "gotchas", caveats, or issues (other than the obvious requirement of having enough CPU and RAM)? Any thoughts or observations are welcome, especially if they come from real-world experience in a VM environment. BTW, in case you're wondering why I'm posting here, it's because I run Debian on all my workstations/servers (running VirtualBox as a VM host for Windows XP on one workstation).
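The firewall-zone question maps naturally onto separate virtual switches: give the firewall VM one NIC bridged to the real uplink and a second NIC on an internal-only network that the service VMs share, so all their traffic has to pass through the firewall. A VirtualBox sketch (the VM names and "eth0" are assumptions):

```shell
# Firewall: NIC1 bridged to the physical uplink, NIC2 on an internal "dmz" switch
VBoxManage modifyvm "firewall"  --nic1 bridged --bridgeadapter1 eth0 \
                                --nic2 intnet  --intnet2 dmz

# Service VMs only ever see the internal switch, never the uplink directly
VBoxManage modifyvm "webserver" --nic1 intnet --intnet1 dmz
```

Internal networks in VirtualBox are purely in-memory, so VM-to-VM traffic never touches a physical NIC.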
I have a lot of RAR files and ISOs. Is there a program like WinRAR which could open them in Linux? Because right now it only opens zip files. Also, I would like to know what the best package manager is (I mean the easiest; I used to use the Software Manager in Mint 9 Xfce). Lastly, I would like to know if there is a good program to make disk images to restore the system.
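For the RAR part, installing the unrar tool is usually enough — the graphical archive manager then picks it up automatically — and an ISO doesn't need extracting at all, since it can be loop-mounted like a disc. Package names below are the usual ones on Mint/Ubuntu:

```shell
sudo apt-get install unrar        # non-free unrar; "unrar-free" also exists
unrar x archive.rar               # extract with directory structure
sudo mount -o loop image.iso /mnt # browse the ISO contents under /mnt
```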
I am running CPU tests on a radio controller to determine the maximum number of simultaneous calls. A tool using top was developed so that we could get a good look at exactly what is happening at the process level; however, we are mainly interested in one object running on the box. The box has a single-core Celeron processor running the Wind River Linux platform. The CPU usage from my object is frequently spiking over 100%. Research online so far has led me to the fact that a multicore processor can do this, but I have found no mention of a single-core processor displaying this behavior.
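One thing worth ruling out first: a "single core" CPU with Hyper-Threading shows up to Linux as two logical CPUs, and in top's default Irix mode a multithreaded process can then legitimately report up to 200% (the "I" key in top toggles between Irix and Solaris accounting). Checking what the kernel actually sees:

```shell
# Count logical CPUs; a result > 1 would explain per-process usage above 100%
grep -c '^processor' /proc/cpuinfo
```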
I am trying to install 11.2 on a Dell PowerEdge 2550 - two processors, all SCSI, RAID disabled, ATI graphics. It fails at "failure to mount clich file system" and reboots.
I've tried searching the forums / google and haven't been able to come up with anything... in Debian-based distros there's an option that can be set to allow boot concurrency so that multiple processor cores can be used for the boot process. Windows also has an option similar to this to specify how many processor cores to use for boot.
Is there an option for multi-core booting in Fedora?
I recently assembled a new desktop computer, the following is the hardware details
CPU: Intel Core i7 2600
Motherboard: GIGABYTE GA-Z68A-D3H-B3 (Intel Z68 chipset)
Video card: GeForce GTX 560 Ti
I tried to install Ubuntu 11.04, both the i386 and x86-64 versions. Both of them are quite unstable. The system crashes easily because of network problems, and the internet connection is extremely slow. I also installed Windows 7 on the same computer, and there the wired network works fine.
In the past, I've deployed new 64 bit systems and I've worked on and developed on 64 bit systems. But until a week ago, my workstation was a 32 bit system. Now, it is a 64 bit quad core Phenom II system, and I suppose I need to start the migration to 64 bit Linux. I do not want to blow off my system and rebuild it. This particular system dates back a decade and through many many updates. There is some digital debris in it, but there is also a fair amount of customization that I have implemented either for my own purposes or for customers, and to lose that customization would represent a headache for me.
What I want to do is install a 64 bit system over top of the 32 bit system. It is my hope that doing this would install the necessary 64 bit libraries, while not impacting the existing 32 bit libraries (except with some possible symlink problems). I then, hopefully, could boot into a 64 bit kernel while still running 32 bit programs. Is this feasible? My backup system is comprehensive; I COULD just try it and back up if my system became hosed. But I'd rather not; I have a lot of work to do and I'd rather not learn by doing in this case.
I am trying to compile SystemC. Configuration completes OK and the Makefiles are created. As soon as the "make" command I issue recursively reaches the "utils" directory, errors are produced - declaration and include errors.
The files referenced by the errors DO exist, and I have managed to give g++ "-I ${LD_INCLUDE_PATH}", i.e. the variable where all header dirs are listed. Libraries and compilers for gcc 3.3 and g++ 3.3 are, I believe, installed OK. code...
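Note that g++ does not consult any variable named LD_INCLUDE_PATH (LD_* variables affect the dynamic linker, not compilation). The environment variables g++ actually honors for header search are CPATH and CPLUS_INCLUDE_PATH, or the directory can be passed explicitly. The SystemC path below is a placeholder:

```shell
# Either export a search-path variable g++ really reads...
export CPLUS_INCLUDE_PATH=/opt/systemc/include
make

# ...or hand the include directory to make explicitly:
make CXXFLAGS="-I/opt/systemc/include"
```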
I am running a Gigabyte GA-M68M-S2P and an AMD Sempron 2.7. The problem is when I try to run dual core: it will boot and run for 2 minutes, then it crashes. Single core runs perfectly.