CentOS 5 Hardware :: Connect A RAID Box To The Server Via LSI 8880EM2 RAID Controller
Aug 3, 2010
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu ETERNUS DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is being recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
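For reference, one way to check whether the kernel ever saw the LUNs, and to force a rescan without rebooting, is roughly this (the host number is just an example):
Code:
# list the SCSI devices the kernel currently knows about
cat /proc/scsi/scsi
# find the SCSI host belonging to the megaraid_sas controller
ls /sys/class/scsi_host/
# force a rescan of that host (replace host0 with the right one)
echo "- - -" > /sys/class/scsi_host/host0/scan
dmesg | tail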
I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising, given one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting the 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
My setup: P4 3.4GHz, 2GB RAM, Gigabit Ethernet. Drive configuration: 1 x 750GB SATA connected to my RAID controller in IDE mode, 1 x 120GB IDE HDD, 1 x 250GB IDE HDD.
My problem: I am trying to install F12 and the only drive that it sees is the 750GB SATA; it is not seeing the other 2 IDE drives. The RAID controller is an ITE 8212 in IDE mode. The BIOS sees the drives, just F12 doesn't.
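In case it helps, the IT8212 is normally handled by the pata_it821x module in recent kernels; a rough check from a shell on the install media (assuming the module is shipped on it) would be:
Code:
# confirm the controller is visible on the PCI bus
lspci | grep -i 8212
# load the libata driver for the ITE 821x family and watch for the disks
modprobe pata_it821x
dmesg | tail -n 30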
I'm working on a new server and it has an Nvidia SATA array controller with 2 250GB SATA drives configured in a hardware array. When the first screen comes up I'm entering the option Linux DD for it to prompt me for the drivers, but nothing ever happens. The screen says that it's loading a SATA driver for about 15 minutes, and then the screen clears and shows a plus-sign cursor on a black screen. What am I doing wrong? The only drivers that came with the HP server are for Red Hat 4 and 5 and SUSE; will any of those actually work?
I am trying to install CentOS onto a PowerEdge 2650 with a Dell PowerEdge Expandable RAID Controller (PERC). I have 5 SCSI disks installed, and have created and initialised 2 logical volumes via the controller setup utility (Ctrl+M after boot). After a reboot the system reports two logical volumes present.
The CentOS installer cannot find any disks when it gets to the disk partitioning/setup step. It reports no disks present. Do I need a specific driver for this controller?
I want to install RHEL 4.7 64-bit on one of my servers (a Supermicro SuperServer) that has two RAID controllers: 1. Intel, 2. Adaptec. We are using the Adaptec one, with RAID 1 across 2 x 320GB hard disks. The point: if we install RHEL 5.3 it recognizes the RAID controller and shows a single logical volume of 298GB, meaning it works fine, but when we try to install RHEL 4.7 it shows two hard disks of 298GB and 298GB, meaning it is unable to recognize the RAID controller. So the issue is its driver; the CD we got from Supermicro only has drivers for RHEL 4 through RHEL 4 update 6. We are building our DR site and it is necessary for us to install RHEL 4.7 to keep it identical. I have searched a lot and spent more than three days on this continuously, and am still unable to find a solution.
I have an IBM x3550 M3 machine with a ServeRAID-M1015 on board. I set up RAID 1 on the machine and have a question: 1. Is there some kind of software to manage the hardware RAID from within the operating system? What I mean by "manage hardware RAID from OS level" is that I would like to see the status of the disks in the array, be able to initiate a rebuild process, etc. This post relates to hardware RAID only, not software RAID.
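The M1015 is an LSI MegaRAID-based card, so the usual answer is LSI's MegaCli command-line tool (there is also a GUI, MegaRAID Storage Manager). Assuming MegaCli64 is installed in its default location, the basic status commands look roughly like this:
Code:
# logical drive (array) status
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
# physical disk status (firmware state, media errors, etc.)
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL
# show rebuild progress for a given enclosure:slot
/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -ShowProg -PhysDrv [252:1] -aALL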
I forced my workplace to forgo Windows and opt for Linux for the web and mail server. I'm setting up CentOS 5.4 on it and I ran into a problem. The server machine is an HP ProLiant DL120 G5 (quad-core processor, 4GB RAM, two SATA drives of 150GB each attached to the onboard hardware RAID controller). RAID is enabled in the BIOS. I pop in the CentOS disk and go through the installation process.
When I get to the stage where I partition my hard drive, it shows one hard drive, but not as the traditional sda: it appears as mapper/ddf1_4035305a86a354a45. I looked around and figured that I need to give CentOS the RAID drivers. I downloaded them from:
[URL]
I follow the instructions and download the aarahci-1.4.17015-1.rhel5.i686.dd.gz file and unzip it using gunzip. Then on another *nix system, I do this:
dd if=aarahci-1.4.17015-1.rhel5.i686.dd of=/dev/sdb bs=1440k
Note that I am using a USB floppy drive, hence the sdb. After that, during CentOS setup, I type: linux updates dd
It asks me where the driver is located. I tell it and the installation continues in graphical mode. But I still get mapper/ddf1_4035305a86.a354a45 as my drive. I tried to continue installing CentOS on it. It was successful, but when I do a "df -h" it gives me /dev/mapper/ddf1_4035305a86......a354a45p1 as /boot
/dev/mapper/ddf1_4035305a86......a354a45p2 as /
/dev/mapper/ddf1_4035305a86......a354a45p3 as /var
/dev/mapper/ddf1_4035305a86......a354a45p4 as /external
/dev/mapper/ddf1_4035305a86......a354a45p5 as /swap
/dev/mapper/ddf1_4035305a86......a354a45p6 as /home
Well, I know why it's giving these, because I set it up that way, but I was hoping it would somehow change to the normal /dev/sda, /dev/sdb. That means the driver I provided did not work. I have another IBM server (5U) with a SCSI RAID drive and it shows the usual /dev/sda. It also has hardware RAID. So I know that there is something wrong with the /dev/mapper/ddf1_4035305a86......a354a45p1 format.
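As a side note, those ddf1_* names come from device-mapper: the installer's dmraid layer found DDF RAID metadata written by the controller and assembled a fakeraid mapping from it, rather than using a native driver. One way to confirm that, from the installed system or a rescue shell, is roughly:
Code:
# list block devices carrying RAID metadata that dmraid recognises
dmraid -r
# show the RAID sets dmraid would assemble from that metadata
dmraid -s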
First, is there any way that I can put the aarahci-1.4.17015-1.rhel5.i686.dd floppy image on a CD? I really need to set this up with RAID. I know I could simply disable RAID in the BIOS and then I would get two normal hard drives, sda and sdb, but it has to be a RAID setup. Is there any way to slipstream the driver into the CentOS DVD? On the HP link I provided above, under the installation instructions, there are some instructions titled "Important", but I couldn't get them to work.
I'm trying to resize an NTFS partition on an IBM MT7977 server. It has an Adaptec AIC-9580W RAID controller. I was thinking about doing it with a GParted LiveCD/LiveUSB, but then I realised that they won't have drivers for the RAID controller. A quick Google for "9580W Linux" doesn't return anything promising.
I want to create a file server with Ubuntu and have two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750GB hard drive (and installed Ubuntu on this drive), installed the Rosewill RAID card in a PCI slot, connected two 1TB hard drives to the Rosewill RAID card, went into the RAID BIOS, and configured it for RAID 1.
My problem: when I boot into Ubuntu and go to the disk utility (I think that's what it's called), I see the RAID controller present with two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
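Cards in that class are usually "fakeraid": the card's BIOS only writes metadata, and the operating system is expected to do the actual mirroring. On Linux it is often simpler to ignore the card's RAID BIOS entirely and build the mirror with mdadm. A minimal sketch, assuming the two 1TB drives appear as /dev/sdb and /dev/sdc with one partition each:
Code:
sudo apt-get install mdadm
# build a RAID 1 mirror from the two partitions
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
# record the array so it is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf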
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong and I succeeded in marking two of the drives as faulty, and then re-adding them as spares.
Now the array is (logically) no longer able to start:
Code:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin what may be the small chance I have left to rescue my data, I would like to hear the input of this wise community.
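Before anything destructive, it seems sensible to record what the superblocks still say and to try a forced assemble first, since a re-create only has a chance of working if the device order and chunk size match the original array; something like:
Code:
# record the existing superblocks (device role, event count, chunk size)
mdadm --examine /dev/sd[abcd]2 > /root/md-examine.txt
# a forced assemble can pull in members with stale event counts
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2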
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
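As a back-of-the-envelope figure, assuming the card rebuilds the whole disk (not just the used blocks) at a sustained 100 MB/s: 1,000,000 MB / 100 MB/s is about 10,000 seconds, or roughly 3 hours. Controllers that throttle the rebuild to leave bandwidth for normal I/O commonly take 2 to 4 times longer, so something between about 3 and 12 hours is a reasonable expectation.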
I am going to be using CentOS 5.4 for a home storage server. It will be RAID 6 on 6 x 1TB drives. I plan on using an external enclosure which is connected via two SFF-8088 cables (4 drives apiece). I am trying to find a non-RAID HBA which would support this external enclosure and allow me to use standard Linux software RAID.
If this is not an option, I'd consider using a hardware-based RAID card, but they are very expensive. The Adaptec 5085 is one option but it is almost $800. If that is what I need for this thing to be solid then that is fine, I will spend the money, but I am thinking that software RAID may be the way to go.
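If a plain HBA that exposes the disks individually works out, the software side is straightforward; a minimal sketch of the 6-disk RAID 6 (device names are just examples):
Code:
# create the 6-disk RAID 6 array
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
mkfs.ext3 /dev/md0
# save the layout so the array is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf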
I'm having a problem with the installation of Ubuntu Server 10.04 64-bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc.). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'generic' Linux driver to solve the problem here.
I have been using lspci, dmidecode, and mpt-status to get hardware information on my Dell 1950 running Ubuntu 8.10. I'm pretty sure my server is using an embedded SCSI RAID controller from info I got from Dell's site:
PCI Express is an actual... well, PCI card, right? But dmidecode shows that I have two x8 PCI Express slots that are both available. So... I'm missing something. How am I running a PCI Express SCSI controller without using a PCI Express slot? If I didn't have the kind of info that I do (i.e. the service tag), how would I be able to tell at a glance whether a component like my RAID controller was embedded or not?
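One way to answer the "at a glance" question without the service tag is to compare what lspci reports against the slot and onboard-device tables in the DMI data: an embedded controller is listed as an onboard device rather than as something occupying a slot. For example (output detail varies by BIOS):
Code:
# PCI ID and kernel driver for the storage controller
lspci -nnk | grep -A 2 -i raid
# physical slots and whether they are in use
dmidecode -t 9
# devices the board reports as onboard (embedded)
dmidecode -t 10 -t 41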
I have a VT6421-based RAID controller. lspci shows this: 00:0a.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE RAID Controller (rev 50). The drivers that come with it appear to have been compiled against an old kernel (I'm guessing). When I try to load them I get "invalid module format". dmesg shows this: viamraid: version magic '2.6.11-1.1369_FC4 686 REGPARM 4KSTACKS gcc-4.0' should be '2.6.27.7-smp SMP mod_unload'. Does anyone know of a way to get this to work? I found the source for this, but it appears to only support Fedora, Mandrake, and Red Hat. I can't get it to compile or make a driver disk.
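Rather than fighting the vendor's viamraid module, it may be worth trying the in-kernel driver first: recent kernels drive the VT6421 with sata_via, in which case the disks show up as plain SATA devices and any RAID would be handled by dmraid or mdadm. A quick check, assuming the module was built for your 2.6.27 kernel:
Code:
modprobe sata_via
dmesg | tail -n 30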
I am trying to install FC10 on my built-in Adaptec RAID controller and the Live CD does not see the controller. From my other ATA-drive FC10 install I get this from lspci -v:
Code:
00:1f.2 RAID bus controller: Intel Corporation 6300ESB SATA RAID Controller (rev 02) (prog-if 8f)
        Subsystem: Super Micro Computer Inc Device 5180
        Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 18
        I/O ports at e200 [size=8]
[code]...
sdb and sdc are the SATA RAID disks... they have already been partitioned as a single RAID 0 device... not sure why/how they still show up as two disks.
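The 6300ESB is host RAID (fakeraid), so the striping lives in metadata on the disks rather than in the controller, which is why sdb and sdc also remain visible as individual drives. Whether the set is usable depends on dmraid picking up that metadata; a quick check from the live system would be roughly:
Code:
# list disks carrying fakeraid metadata and the set they belong to
dmraid -r
dmraid -s
# activate the set; it should then appear under /dev/mapper/
dmraid -ay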
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is), but I still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD (without RAID), which houses my OS install, and then I have 2 1TB drives (these are just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller on boot recognizes the array as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), I cannot get at the array. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working that my searches have not taken me to?
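On that board the mirror is "fakeraid": the Intel BIOS only writes Matrix RAID metadata to the disks, and on Ubuntu the piece that reads that metadata and exposes the array is dmraid rather than a vendor driver. A minimal sketch of what should make the mirror appear, assuming dmraid is not yet installed (the mapper name is just an example):
Code:
sudo apt-get install dmraid
# list the RAID sets found in the on-disk metadata
sudo dmraid -s
# activate them; the mirror should appear under /dev/mapper/ as isw_*
sudo dmraid -ay
sudo mount -t ntfs-3g /dev/mapper/isw_xxxxxxxx_Volume01 /mnt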
Why could there not be a 3-way or even 4-way RAID level 1 (mirror)? It seems every hardware RAID controller (and at least the software I tested a few years ago) only supports a 2-way mirror. I recently tried to configure a 3ware 9650SE RAID controller. I selected all 3 drives; RAID 1 was then not presented as an option, only RAID 0 (striping, no redundancy) and RAID 5 (one level of redundancy, low performance). Is there some engineer who thinks "triple redundancy is a waste, so I'm not going to let them do that"? Or is it a manager?
Mirror RAID should be simple, even when more than 2 drives are used. The data is simply written in parallel to all the drives in the mirror set and read from one of the drives (with load balancing over parallel and/or read-ahead operations to improve performance, though some of this is in question too).
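For what it's worth, Linux software RAID has no such limit: mdadm will happily build an n-way mirror, which is one way around controllers that stop at two. A minimal sketch of a 3-way RAID 1 (device names are examples):
Code:
# all three members hold a complete copy of the data
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --detail /dev/md0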
I am looking to convert a RAID 1 server I have to RAID 10. It is using software RAID; currently I have 3 drives in RAID 1. Is it possible to boot into CentOS rescue, stop the RAID 1 array, then create the RAID 10 with 4 drives, 3 of which still have the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data? Or is there a better way to do it?
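From what I have read so far, mdadm will not reinterpret RAID 1 metadata as RAID 10, so the usual approach seems to be building the new array degraded, copying the data across, and only adding the final disk afterwards. A sketch, assuming one old RAID 1 member (sdd1 here) is kept aside untouched as the data source and everything is backed up first:
Code:
# build the RAID 10 with one slot deliberately left missing
mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 missing
mkfs.ext3 /dev/md10
mount /dev/md10 /mnt/new
cp -a /mnt/old/. /mnt/new/
# once the copy is verified, wipe the held-back disk and add it
mdadm --add /dev/md10 /dev/sdd1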
I have a box that doesn't currently have a RAID controller or software RAID running. I would like to make it a RAID 1. Since it seems there are hardly any IDE RAID controllers around, I have another HD that is the exact model of the drive currently in the box running CentOS. Can I somehow add the second drive and get the box to mirror from here on out? The box gets really hot and I want to be ready for an HD failure.
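This can be done with Linux software RAID and no controller at all, by creating the mirror degraded on the new disk, copying the system onto it, and then adding the original disk. Very roughly (partition names are examples; the full procedure also involves updating fstab and the bootloader):
Code:
# create a RAID 1 with the old disk's slot left "missing" for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1
mkfs.ext3 /dev/md0
# copy the running system onto the degraded mirror, switch to booting
# from it, then repartition the original disk and add it:
mdadm --add /dev/md0 /dev/hda1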
I'm working on a server and noticed that the RAID 5 setup is showing 4 Raid Devices but only 3 Total Devices. It's on a fully updated CentOS 5 system that only has three SATA drives, as it cannot hold any more. I've done some researching but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Does anyone have any pointers on how I can reduce the Raid Devices from 4 to 3? I have tried
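For reference, newer mdadm versions can reshape the member count with --grow, but going from 4 devices down to 3 shrinks a RAID 5, so the filesystem and the array size have to be reduced first and a backup file is required for the reshape. Very roughly, and only after a good backup (the size value is a placeholder):
Code:
# shrink the filesystem first, then the array size, then the member count
mdadm --grow /dev/md2 --array-size=<new-size>
mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-reshape.bak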
I need to add the LSI drivers for the 9750 RAID controller during the install. These drivers are not included in 10.04 (or 10.04.1) and I need to install onto the RAID device I've created. LSI provides the drivers and instructions here - [URL]
Here are my steps, with the drivers on a USB drive:
Code:
Boot from the installation CD and select Install Ubuntu Server. Press CTRL+ALT+F2 to switch to console 2 while Ubuntu detects the network.
# mkdir /mnt2 /3ware
# mount /dev/sda1 /mnt2      (NOTE: the LSI drivers are at /dev/sda1, via USB)
# cp /mnt2/9750-server.tgz /3ware
# cd /3ware ; tar zxvf 9750-server.tgz
# umount /mnt2
* Remove the USB flash drive before the insmod command *
# insmod /3ware/2.6.32-21-generic/3w-sas.ko
Press CTRL+ALT+F1 to return to the installer. Continue the installation as usual. Do not reboot when the installation is complete. Press CTRL+ALT+F2 to switch to console 2 again.
Press CTRL+ALT+F1 to return to the installer. Reboot to complete the installation. There are no errors, but after I reboot I just get "GRUB" in the upper left corner, nothing else.
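In case it is relevant to the "GRUB" hang: the step performed on console 2 after the install completes is supposed to put the driver into the installed system so that its initramfs can see the RAID device at boot. As far as I understand LSI's notes (paths here are my best guess, not verified), it amounts to something like:
Code:
# /target is the freshly installed system
cp /3ware/2.6.32-21-generic/3w-sas.ko /target/lib/modules/2.6.32-21-generic/kernel/drivers/scsi/
chroot /target depmod -a 2.6.32-21-generic
chroot /target update-initramfs -u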
I was trying to update the driver for my Adaptec RAID controller. Unfortunately, Adaptec only provides RPM packages, so I converted the package using alien. After installing it with dpkg, I then tried using dkms to build the module:
Code:
root@atulsatom# dkms add -m aacraid -v 1.1.5.26400
Adding the driver was successful, but I got some errors during the build:
Code:
root@atulsatom# dkms build -m aacraid -v 1.1.5.26400
Kernel preparation unnecessary for this kernel. Skipping.
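When a dkms build fails, the actual compiler errors usually end up in the module's build log rather than on the console; assuming the standard dkms layout, that would be:
Code:
cat /var/lib/dkms/aacraid/1.1.5.26400/build/make.log
# and confirm the headers for the running kernel are installed
dpkg -l linux-headers-$(uname -r)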
I have been thinking about upgrading my RAID setup from a pair of Intel X25s running the software-based RAID included with Ubuntu to four Intel X25s running off a PCIe-based controller. The controller is an HP P400, which I know works great on Ubuntu (I have other machines running it with SAS drives). My desktop has an Intel S5000XVNSATAR mainboard and I have an open PCIe v1.0 x8 (physical) slot that is wired x4. The controller is x8 and will fit fine. How much of a performance hit do you think I will see running this x8 controller in the slot wired x4 with four Intel X25s in a RAID 10? Will I have enough bandwidth with the x4 for those SSDs?
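Some rough numbers, assuming PCIe 1.0 delivers about 250 MB/s usable per lane and each X25 reads at roughly 250 MB/s sequentially: the x4 link gives about 4 x 250 MB/s = 1 GB/s, and four X25s reading flat out would also want about 1 GB/s. So large sequential reads could bump into the slot limit, while writes (which a 4-drive RAID 10 caps at two drives' worth) and random I/O should sit comfortably within the x4 link.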
I bought a used server and it's in great working condition, but I've got a 2-part problem with the onboard RAID controller. I don't have or use RAID and want to figure out how to stop or work around the onboard SATA RAID controller. First, some motherboard specs: Arima HDAMA 40-CMO120-A800 [URL]... The 4-port integrated Silicon Image Sil3114 Serial ATA RAID controller is the problem.
Problem 1: When I plug in my SATA server hard drive loaded with Slackware 12.2 and Linux kernel 2.6.30.4, the onboard RAID controller recognizes the one drive and allows the OS to boot. Slackware gets stuck looking for a RAID array and stops at this point -........
I have CentOS 5.3. I have two hard drives; the 1st is used as the primary, and a backup of the required things from the 1st drive is copied onto the 2nd. I want to configure mirroring on this server. How do I configure this, and is RAID 1 the right choice or not?
I am looking to set up a CentOS server with RAID 5. I was wondering what the best way to set it up is, and how, with the ability to add more HDDs to the RAID system later on if needed?
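If software RAID is acceptable, mdadm handles both the initial RAID 5 and later growth; a minimal sketch with three disks now and a fourth added later (device names are examples):
Code:
# initial 3-disk RAID 5
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0
# later: add a fourth disk and reshape the array onto it
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
# then grow the filesystem into the new space
resize2fs /dev/md0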
I have a DL120 ProLiant server that has a P212 RAID card. If I install Lenny it works fine; however, I need Squeeze. If I upgrade to or install a new version of Squeeze, the RAID controller is no longer visible. I have done some snooping and it seems as though the cciss drivers have been replaced by the hpsa drivers, but I still can't seem to get the RAID card recognised. Has anybody got any tips?
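One thing that may be worth checking before anything else: early hpsa versions do not list every Smart Array model in their PCI ID table, and there is a module parameter that lets the driver claim unknown controllers anyway. Assuming the card is still visible on the PCI bus, something like:
Code:
# confirm the controller is present and see which driver (if any) claimed it
lspci -nnk | grep -A 3 -i "smart array"
# ask hpsa to bind to controllers it does not explicitly know about
modprobe hpsa hpsa_allow_any=1
dmesg | grep -i hpsa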