I am trying to install CentOS onto a PowerEdge 2650 with a Dell PowerEdge Expandable RAID Controller (PERC). I have 5 SCSI disks installed, and have created and initialised 2 logical volumes via the SCSI controller setup utility (Ctrl-M after boot). After a reboot the system reports two logical volumes present.
The CentOS installer cannot find any disks when it gets to the disk partitioning/setup step; it reports no disks present. Do I need a specific driver for this controller?
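The PERC family is normally driven by a megaraid driver, so one first check is whether any such module loaded at all. A minimal sketch of what you might run from the installer's shell (Ctrl+Alt+F2); the module name is an assumption for a PERC-era controller and varies by generation:

```shell
# From the installer's shell (Ctrl+Alt+F2): is any megaraid module
# loaded? The name "megaraid" is an assumption for a PERC-era card.
status=$(grep -qi 'megaraid' /proc/modules && echo loaded || echo missing)
echo "megaraid driver: $status"
```

If nothing is loaded, the installer usually needs a driver disk supplied via the `linux dd` boot option.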
I'm new to the Dell PowerEdge 2900 and I don't know how to install CentOS 5.2 on it. Which OS type should I select? At first I selected RH Enterprise 5 and it formatted the HDD. I then put the CentOS 5.2 disc in, but it was rejected and the CD ejected. When I powered off it showed:
"root(hd0,1)filesystem type is ext2f2,partition type0x83 kernel/linux/vmlinz%(ksLocations) Error is : file not found press any key to continue"
I am having issues installing CentOS on a Dell PowerEdge 1950. I have tried 5.3 and 5.4, and the text-only install. The error appears when I select which keyboard layout to use: a window pops up and says "Assertion ((C * heads + H) * sector + S == A) at dos.c:624 in function probe_partition_for_geom() failed". I can click ignore or cancel, but it instantly pops back up. I have tried to look up the error but can't find any information on it.
I just got my new server today (a Dell PowerEdge 2850), and I'm trying to install the latest version of Ubuntu on it; however, it fails to detect my hard drives. They are in a RAID array (RAID 1, I believe). I've never worked with a RAID array before, and I'm wondering: is there anything special I have to do to install Ubuntu on one?
I have two Dell PowerEdge 2650 servers. One is a dual Xeon 3GHz with 4GB of RAM (4 x 1GB sticks). The other is a single Xeon 2.4GHz with 3GB of RAM (2 x 512MB and 2 x 1GB sticks).
Both are running 32-bit CentOS 5.3 with the same kernel (vmlinuz-2.6.18-164.el5). I've even diff'ed the two kernels to ensure that they are in fact identical. The only parameters being passed to the kernel via the grub config are "ro root=/dev/sda2".
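Another quick sanity check is comparing the two kernels by checksum rather than diff. Sketched below with stand-in files, since I can't touch the real servers; on the real boxes you would checksum /boot/vmlinuz-2.6.18-164.el5 on each and compare:

```shell
# Stand-in files in place of the two servers' kernel images; on the
# real boxes you would checksum /boot/vmlinuz-2.6.18-164.el5 on each.
printf 'kernel-bytes' > /tmp/vmlinuz.a
printf 'kernel-bytes' > /tmp/vmlinuz.b
a=$(sha256sum /tmp/vmlinuz.a | cut -d' ' -f1)
b=$(sha256sum /tmp/vmlinuz.b | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "kernels identical" || echo "kernels differ"
```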
The BIOS shows 4GB detected on the 4GB server and 3GB detected on the 3GB server on boot.
Dell OpenManage for the 4GB server shows "Total Installed Capacity 4096 MB, Available to the OS 3804 MB", as I would expect. However, OpenManage for the 3GB server shows "Total Installed Capacity 3072 MB, Available to the OS 1011 MB" (i.e. it is only seeing the first 1GB of memory).
cat /proc/meminfo confirms the above; the 4GB server shows "MemTotal: 3895980 kB", but the 3GB server shows "MemTotal: 1035024 kB"
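For anyone comparing boxes, the MemTotal figure can be pulled out with a one-line awk. The sample file below stands in for the real /proc/meminfo:

```shell
# Sample data stands in for the real /proc/meminfo on each server.
cat > /tmp/meminfo.sample <<'EOF'
MemTotal:      1035024 kB
MemFree:         88132 kB
EOF
awk '/^MemTotal:/ {print $2}' /tmp/meminfo.sample
```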
I installed the lshw utility, and on the 3GB server it shows all 4 memory sticks and the full 3GB:
*-memory
     description: System Memory
     physical id: 1000
     slot: System board or motherboard
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. MegaCLI, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID 0 drive on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but on restarting the server the controller's BIOS fails with an error (not surprising for a single-disk RAID 0) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with the RAID configured on the box, running it all under CentOS? What were the results?
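One hedged suggestion: if the controller's BIOS sees the enclosure but CentOS never creates the /dev nodes, a SCSI-host rescan sometimes makes the kernel pick up LUNs attached after boot. A minimal sketch; host numbers vary, root is required, and it is a harmless no-op when no writable scan files exist:

```shell
# Hedged sketch: ask each SCSI host to rescan its bus for new LUNs.
# Requires root; harmless no-op when no writable scan files exist.
rescanned=0
for f in /sys/class/scsi_host/host*/scan; do
    if [ -w "$f" ]; then
        echo "- - -" > "$f"
        rescanned=$((rescanned+1))
    fi
done
echo "hosts rescanned: $rescanned"
```

After the rescan, new devices (if any) should appear in /proc/partitions.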
I have tried several times to install CentOS 5.4 32-bit via DVD and via the netinstall method. The installation stops at the disk-formatting step, always at 12%, with no error message other than that it can't format the logical volume and to please reboot the server. I have also deleted and created a fresh RAID 1 array, but that didn't help.
I am looking at getting a Dell PowerEdge 850 (Pentium 4 3.6GHz, 64-bit) with two 160GB SATA drives, and I was wondering if anyone has installed CentOS on one of these servers, or has had any issues installing CentOS on them.
We have 22 Dell servers (20 PowerEdge R210 + 2 PowerEdge R310) with CentOS 5.5 x86_64 (kernel 2.6.18-194.26.1.el5). Half of them are in a different rack. During the day, some servers reboot without any reason, and the messages log file has no evidence of what happened.
From /var/log/messages:
...
Dec 8 14:10:02 hadoop011 snmpd: Received SNMP packet(s) from UDP: [172.27.1.21]:50825
Dec 8 14:10:02 hadoop011 snmpd: Connection from UDP: [172.27.1.21]:50825
Dec 8 14:15:26 hadoop011 syslogd 1.4.1: restart.
A few minutes after the reboot, messages logs this error:
Dec 8 14:18:14 hadoop011 Server Administrator: Instrumentation Service EventID: 1404 Memory device status is critical Memory device location: DIMM_A3 Possible memory module event cause: Multi bit error encountered.
Is it possible that circa 20% of the 60 memory banks are broken? By the way, the memory banks have a lifetime warranty from Kingston. If I reboot a server affected by this error, the error disappears from the messages log file and from the OpenManage web interface.
Is it possible that this is a CentOS bug? I searched for related bugs on Bugzilla but without any results.
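To see how widespread the DIMM errors really are before blaming CentOS, the OpenManage lines can be tallied per slot across all servers' logs. The sample file below stands in for the real /var/log/messages:

```shell
# Sample line stands in for the real /var/log/messages; the grep/uniq
# pipeline tallies how often each DIMM slot is named in the events.
cat > /tmp/messages.sample <<'EOF'
Dec 8 14:18:14 hadoop011 Server Administrator: Instrumentation Service EventID: 1404 Memory device status is critical Memory device location: DIMM_A3
EOF
grep -o 'DIMM_[A-Z0-9]*' /tmp/messages.sample | sort | uniq -c
```

If the same few slots keep recurring across reboots, that points at hardware rather than the OS.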
I am trying to install Ubuntu Server 9.10 on two Dell PowerEdge 2550 servers that I have at work, and I can't get the install to complete on either. I saw some posts saying that it could be bad memory in the computers, so I even put brand-new memory in each, which needed upgrading anyway, but the install still fails. The install usually hangs when trying to detect network drivers, and it also hangs on some other file (I forget which now). I remember I had an older version of Linux on one of them before. The install says the file is corrupted on the CD. I have burned multiple CDs in different drives on different computers using the lowest speed on all of them, and all install attempts fail. I can even take the exact same CDs and install in VMware on my Mac, so I know for a fact the CDs are fine. What makes me more confused is that I can install Windows Server 2003 without a hitch on both of the servers I am trying to install Ubuntu on.
I am trying to install Ubuntu 8.04, the general desktop flavour, on a Dell PowerEdge 2650, but it is not working. Doesn't Ubuntu 8.04 support installation on a Dell PowerEdge 2650? If not, what can be done about it, or is there a later version which does?
A friend of mine gave me an old Dell PowerEdge SC 1430 tower server. It has three hard drives in it, about 160GB each. The system had a Promise FastTrak TX1430 RAID card in it, and I am not sure: would that be hardware RAID, or would you consider that BIOS RAID? In any event, I have the very latest openSUSE 11.2 install DVD. I can boot from the DVD, do a fresh install, and the system starts up to where I can sign on as the user. When I look at the partitioning, it has /dev/sda, /dev/sdb and /dev/sdc, each being a hard drive. These are just listed under the hard drives, not under RAID.
The problem is that when I cold shut down the machine and restart from a power-up, the system comes up and cannot find the boot disk. So obviously I am doing something wrong. I am used to having 1 or 2 plain SATA drives in a machine and no RAID at all. I have no experience installing Linux on a machine with RAID, and again I'm not sure if this is BIOS or hardware RAID.
The FC11 i386 DVD iso image passes PGP verification. It burns fine and verifies fine under Nero. It boots fine, verifies fine on my Athlon system, and installs just fine, not one issue at all: 1374 packages install without a problem. Take the same disc, put it into my Dell PowerEdge 2850 (dual Xeon), and the disc fails to verify.
Even if I go ahead and try to install, it will boot and go through the menus; then during package install, at some point it will say one is corrupt and halt the install. Yet Ubuntu 9.04 installs just fine on it, so I don't see how it could be the DVD drive itself, though that's not 100% ruled out.
Has anyone successfully installed openSUSE 11.2 on a Dell PowerEdge 1850 system? Undoubtedly it wasn't wise to attempt to upgrade from 10.2 to 11.2, but I was forced into it by a security audit of DMZ systems. It didn't go as smoothly as the jump from 9.3 to 10.2. I've spent the better part of a week getting an openSUSE 11.2 system installed. I went back to reiserfs and managed to get through and apply all the updates to openSUSE 11.2 today. When I got to the final reboot, it failed when I attempted to log into the system: basically, the keyboard and mouse became inoperable. The only way out of this situation is to mash the reset button.
Rebooting into safe mode and adding level 3 to the safe-mode boot options, I get to the point of starting sshd when the lock-up occurs. It looks like X11 is the culprit. Booting a forensics DVD, I find that there is an xorg.conf.installation but no xorg.conf file. The ATI Radeon VE chipset is used on the PowerEdge. On another openSUSE system that I haven't upgraded, I see that X11 was configured for VESA mode. This is probably the mode that is needed, as console access is normally through a KVM. Can anyone provide a way to get past this problem with a workable keyboard and mouse?
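If VESA is indeed what the KVM setup needs, a minimal xorg.conf Device section forcing it might look like the fragment below. This is an assumption based on the un-upgraded system's configuration, not a verified fix; back up any existing xorg.conf first:

```
Section "Device"
    Identifier "Card0"
    Driver     "vesa"
EndSection
```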
I'm working on a new server that has an Nvidia SATA array controller with 2 x 250GB SATA drives configured in a hardware array. When the first screen comes up, I enter the option Linux DD so it will prompt me for the drivers, but nothing ever happens. The screen says that it's loading a SATA driver for about 15 minutes, and then the screen clears to a plus-sign cursor on a black screen. What am I doing wrong? The only drivers that came with the HP server are for Red Hat 4 and 5 and SUSE; will any of those actually work?
I am attempting to install CentOS 5 on a Dell PowerEdge 6400. The drives are configured for RAID 5 with about 67GB of space on the server. When I select any option under partitioning of the hard drive, it gives me an error: "No Drives Found - An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem."
I pushed my workplace to forgo Windows and opt for Linux for the web and mail server. I'm setting up CentOS 5.4 on it and I ran into a problem. The server machine is an HP ProLiant DL120 G5 (quad-core processor, 4GB RAM, two 150GB SATA drives attached to the onboard hardware RAID controller). RAID is enabled in the BIOS. I pop in the CentOS disk and go through the installation process.
When I get to the stage where I partition my hard drive, it shows one hard drive, not as the traditional sda but as mapper/ddf1_4035305a86a354a45. I looked around and figured that I need to give CentOS the RAID drivers. I downloaded them from:
I follow the instructions: I download the aarahci-1.4.17015-1.rhel5.i686.dd.gz file and unzip it using gunzip. Then, on another *nix system, I do this:
dd if=aarahci-1.4.17015-1.rhel5.i686.dd of=/dev/sdb bs=1440k
Note that I am using a USB floppy drive, hence the sdb. After that, during CentOS setup, I type: linux updates dd
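One step worth adding after the dd: verify the written image against the source with cmp, to rule out a bad floppy write. The sketch below uses dummy files in place of the real .dd image and the floppy device:

```shell
# Dummy files stand in for the real .dd image and the floppy device;
# the point is the cmp verification after the dd copy.
printf 'driver-image-bytes' > /tmp/driver.dd
dd if=/tmp/driver.dd of=/tmp/floppy.img bs=1440k 2>/dev/null
cmp -s /tmp/driver.dd /tmp/floppy.img && echo "image verified" || echo "verify failed"
```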
It asks me where the driver is located; I tell it and the installation continues in graphical mode. But I still get mapper/ddf1_4035305a86.a354a45 as my drive. I tried to continue and install CentOS on it. It was successful, but when I do a "df -h" it gives me:
/dev/mapper/ddf1_4035305a86......a354a45p1 as /boot
/dev/mapper/ddf1_4035305a86......a354a45p2 as /
/dev/mapper/ddf1_4035305a86......a354a45p3 as /var
/dev/mapper/ddf1_4035305a86......a354a45p4 as /external
/dev/mapper/ddf1_4035305a86......a354a45p5 as /swap
/dev/mapper/ddf1_4035305a86......a354a45p6 as /home
Well, I know why it's giving these names, because I set it up that way, but I was hoping it would somehow change to the normal /dev/sda, /dev/sdb. That means the driver I provided did not work. I have another IBM server (5U) with RAID SCSI drives and it shows the usual /dev/sda; it also has hardware RAID. So I know that there is something wrong with the /dev/mapper/ddf1_4035305a86......a354a45p1 format.
First, is there any way that I can put the aarahci-1.4.17015-1.rhel5.i686.dd floppy image on a CD? I really need to set this up with RAID. I know I could simply disable RAID in the BIOS and then I would get two normal hard drives, sda and sdb, but it has to be a RAID setup. Is there any way to slipstream the driver into the CentOS DVD? The HP link I provided above has, under the installation instructions, some instructions titled "Important", but I couldn't get them to work.
I cannot install 11.3 on a machine with an Intel RAID controller. I have tried RAID 1 using the card, and then setting the disks to individual RAID 0 and letting SUSE RAID them. With the card doing it, the machine crashes as soon as it tries to boot the first time; with SUSE doing the RAID, I just get 'GRUB' on the screen. It seems a lot of people are having similar problems; does anyone have any pointers? 11.2 installs fine. I would try to file a bug report, but every time I go to the page it's in Czech.
I want to install RHEL 4.7 64-bit on one of my servers (a Supermicro SuperServer) that has two RAID controllers: 1. Intel, 2. Adaptec. We are using the Adaptec one, with RAID 1 on 2 x 320GB hard disks. POINT: if we install RHEL 5.3 it recognises the RAID controller and shows a single logical volume of 298GB, meaning it works fine; but when we try to install RHEL 4.7 it shows two hard disks of 298GB and 298GB, meaning it is unable to recognise the RAID controller. So the issue is its driver: the CD we got from Supermicro has drivers for RHEL 4 through RHEL 4 update 6. We are building our DR site and it is necessary for us to install RHEL 4.7 to make it identical. I have searched a lot and spent more than three days on this continuously, and am still unable to find a solution.
I am trying to install CentOS 5.5 on an older Dell PowerEdge 2450. Is there any reason the install would fail? It does load the initial install screen, but when I hit Enter it simply goes to a blank screen with a blinking cursor (I want to say it says "What are you doing, Dave?", but I won't). It just sits there on this screen.
I need to add the LSI drivers for the 9750 RAID controller during the install. These drivers are not included in 10.04 (or 10.04.1) and I need to install onto the RAID device I've created. LSI provides the drivers and instructions here - [URL]
Here are my steps, with the drivers on a USB drive:
Code:
Boot from the installation CD and select Install Ubuntu Server.
Press CTRL+ALT+F2 to switch to console 2 while Ubuntu detects the network.
# mkdir /mnt2 /3ware
# mount /dev/sda1 /mnt2     (NOTE: the LSI drivers are at /dev/sda1, via USB)
# cp /mnt2/9750-server.tgz /3ware
# cd /3ware ; tar zxvf 9750-server.tgz
# umount /mnt2
* Remove the USB flash drive before the insmod command *
# insmod /3ware/2.6.32-21-generic/3w-sas.ko
Press CTRL+ALT+F1 to return to the installer. Continue the installation as usual. Do not reboot when the installation is complete. Press CTRL+ALT+F2 to switch to console 2 again.
I want to stop using Windows because it sucks, so I have downloaded all kinds of Linux distributions. They all give the same error, because it seems Linux has problems with fakeraid. For now I have OpenSuse running in VMware 7.0.1, but I want it as the only OS.
The installation goes fine, but at the end it gives a GRUB error because it cannot create the bootloader. It seems to be a common problem, and I have done all the steps that I could find on Google.
I have two RAID controllers. One is integrated on the mainboard (an Asrock ALiveNF7G-HD720p R5.0) and OpenSuse sees it as a JMicron controller. Because Linux failed to install on the JMicron, I also bought an EM2001 2-port PCI SATA controller card with two hard disks in RAID 0. On the EM2001 it also fails with the same error.
I want OpenSuse 11.2 working on RAID 0. I know it must be some simple commands in a terminal through a live CD to correct the bootloader and do it manually, as Linux users would, but I'm a Windows user.
Can someone please tell me the exact steps and commands to install Linux on RAID 0 fakeraid?
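I can't give exact commands for this hardware, but the usual live-CD recovery for a fakeraid GRUB failure looks roughly like the pseudocode below. Every device name here is a placeholder that must be replaced with what `dmraid -r` actually reports on the machine:

```
(boot a live CD, open a root terminal)
dmraid -ay                          # activate the fakeraid sets
mount /dev/mapper/<raidset>p2 /mnt  # mount the root partition (name varies)
grub                                # start the GRUB shell
grub> root (hd0,1)                  # the partition holding /boot (adjust)
grub> setup (hd0)                   # install GRUB to the array's MBR
grub> quit
```

The key point is that GRUB must be installed against the assembled /dev/mapper device, not one of the raw disks.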
I want to create a file server with Ubuntu and two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (a Rosewill RC-201). I took an old machine with a 750GB hard drive (and installed Ubuntu on that drive), installed the Rosewill RAID card in a PCI slot, and connected two 1TB hard drives to it. I then went into the RAID BIOS and configured it for RAID 1.
My problem: when I boot into Ubuntu and go to the disk utility (I think that's what it's called), I see the RAID controller present, with the two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I still see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
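Cards in the RC-201 class are typically host RAID (fakeraid), so the mirror only exists if the OS assembles it. A hedged way to inspect what, if anything, the kernel has assembled; the paths below are the standard Linux ones:

```shell
# Hedged inspection of what the kernel assembled from the card's
# metadata; /proc/mdstat and /dev/mapper are the standard locations.
[ -r /proc/mdstat ] && cat /proc/mdstat
ls /dev/mapper > /tmp/mapper-nodes.txt 2>/dev/null
cat /tmp/mapper-nodes.txt
echo "inspection done"
```

If neither shows an array, Ubuntu is treating the disks as plain drives; many people skip the card's RAID entirely and build a software RAID 1 with mdadm instead.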
I have a Dell PowerEdge, 6 months old, running Fedora Core 8, with 2GB RAM and a 500GB SATA HD. I am running 3 VMware machines: two Win2003 Servers and one XP Pro. My server crashes almost daily. It doesn't actually reboot; it just freezes. It got to the point where I had to get a device that automatically reboots the server when it can no longer ping it. /var/log/messages has no entries indicating the server crashed; it just stops at the point where it dies. The Windows VMware guests also have no errors in their event logs to indicate a problem.