Server :: Software To Manage Hardware RAID Controller
Aug 16, 2011
I have an IBM x3550 M3 machine with a ServeRAID-M1015 on board. I set up RAID1 on the machine and have a question: is there some kind of software to manage the hardware RAID from within the operating system? What I mean by "manage hardware RAID from the OS level" is that I would like to see the status of the disks in the array, be able to initiate a rebuild process, and so on. This post relates to hardware RAID only, not software RAID.
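The M1015 is an LSI MegaRAID-based card, so LSI's MegaCLI utility should be able to do exactly this from within Linux. A minimal sketch of the kind of commands involved; the install path and the [enclosure:slot] numbers below are examples and need to be checked against the actual setup:

Code:
# show logical drive status for all adapters
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll
# list physical drives and their states (Online, Failed, Rebuilding...)
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll
# start a rebuild on a drive identified by [enclosure:slot] on adapter 0
/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -Start -PhysDrv [252:1] -a0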
We have a LAN with mixed Windows workstations (Windows 2000, XP, Vista, 7) and Linux servers, all in a workgroup. Most applications used on the LAN are Windows based, with a growing number of Python apps. A friend suggested a Primary Domain Controller would be a better way to manage logins, resources, etc. I don't want to use a Windows-based PDC; what would you suggest as a Linux-based PDC? I have heard about TurnKey PDC, but it uses Samba 3 and apparently doesn't handle Active Directory in Windows.
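For what it's worth, a Samba 3 PDC only needs a handful of smb.conf directives. A minimal sketch, assuming a domain name of MYDOMAIN (adjust names and paths for the actual network):

Code:
[global]
   workgroup = MYDOMAIN
   security = user
   domain logons = yes       ; act as PDC for NT-style domain logons
   domain master = yes
   local master = yes
   preferred master = yes
   os level = 65

[netlogon]
   path = /var/lib/samba/netlogon
   read only = yes

Note this gives an NT4-style domain, not Active Directory; Samba only gained AD domain controller support in version 4, which is why Samba 3 based appliances can't do AD.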
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu ETERNUS DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is being recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising, given one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
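One thing worth checking: if the controller only exports the volume after the driver loads, the SCSI bus may simply need a rescan before a /dev node appears. A sketch, assuming the controller is host0 (check /sys/class/scsi_host/ for the right host number):

Code:
# see which SCSI hosts the kernel knows about
ls /sys/class/scsi_host/
# force a rescan of all channels/targets/LUNs on host0
echo "- - -" > /sys/class/scsi_host/host0/scan
# then check whether a new disk appeared
dmesg | tail
cat /proc/partitions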
I want to install RHEL 4.7 64bit on one of my servers (Supermicro SuperServer) that has two RAID controllers: 1. Intel, 2. Adaptec. We are using the Adaptec one, with RAID 1 across 2 x 320GB hard disks. The point is: if we install RHEL 5.3, it recognizes the RAID controller and shows a single logical volume of 298GB, i.e. it works fine. But when we try to install RHEL 4.7, it shows two hard disks of 298GB each, meaning it is unable to recognize the RAID controller. So the issue is the driver: the CD we got from Supermicro only has drivers for RHEL 4 through RHEL 4 update 6. We are building our DR site and it is necessary for us to install RHEL 4.7 to keep it identical. I have searched a lot and spent more than three days on this continuously, and am still unable to find a solution.
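If Supermicro or Adaptec has a driver disk image built against a newer RHEL 4 update, the usual way to feed it to the RHEL 4 installer is as a driver disk at boot. A sketch, with the image filename purely as an example:

Code:
# write the driver disk image to a floppy (or USB floppy) from another Linux box
dd if=adaptec-rhel4u7.img of=/dev/fd0 bs=1440k
# then at the RHEL installer boot prompt:
linux dd

The installer then prompts for the driver disk before probing storage. Whether a driver built for the 4.7 kernel exists at all is the real question, though.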
I'm trying to resize an NTFS partition on an IBM MT7977 server. It has an Adaptec AIC-9580W RAID controller. I was thinking about doing it with a GParted LiveCD/LiveUSB, but then I realised that it won't have drivers for the RAID controller. A quick Google search for "9580W Linux" doesn't return anything promising.
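Before ruling the live CD out, it may be worth just booting it and checking whether the kernel picked the controller up anyway, since many Adaptec chips are covered by in-tree drivers. A quick check from the live environment:

Code:
# does the kernel see the controller at all?
lspci | grep -i -E 'raid|scsi|adaptec'
# did a driver bind and expose the logical drive?
dmesg | grep -i -E 'scsi|sd[a-z]'
cat /proc/partitions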
I want to create a file server with Ubuntu and have two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750GB hard drive (installed Ubuntu on this drive), installed the Rosewill RAID card via a PCI port, connected two 1TB hard drives to the Rosewill RAID card, then went into the RAID BIOS and configured it as RAID 1.
My problem: when I boot into Ubuntu and go to the disk utility (I think that's what it's called), I see the RAID controller present with two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I still see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
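Cards in this class are usually "fakeraid": the BIOS only writes metadata on the disks, and the OS is expected to assemble the array itself. On Ubuntu that is handled by dmraid, so it may be worth checking whether it sees the set. A sketch:

Code:
sudo apt-get install dmraid
# list the RAID sets described by the card's BIOS metadata
sudo dmraid -r
# activate them; the mirror should appear under /dev/mapper
sudo dmraid -ay
ls /dev/mapper

If dmraid doesn't support the chip, plain Linux software RAID (mdadm) across the two drives is the usual fallback.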
I've tried to install Fedora 11, both 32 and 64 bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running Intel RAID software under W7 currently, and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID setup causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
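One way to see what the installer is tripping over is to boot a live image and inspect the Intel RAID metadata directly; mdadm of that era understands Intel Matrix (IMSM) metadata. A sketch, with device names as examples:

Code:
# show any BIOS RAID metadata on a member disk
sudo mdadm --examine /dev/sda
# or see whether dmraid recognises the set
sudo dmraid -r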
I'm having a problem with the installation of Ubuntu Server 10.04 64bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc.). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'general' Linux driver to solve the problem here.
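If the RAID option in that machine is a ServeRAID-7k (a common option on the x346; this is an assumption worth verifying), the relevant in-kernel driver is ips, which covers IBM ServeRAID adapters, and it can be tried by hand from the installer shell. A sketch:

Code:
# from the installer's console (Ctrl+Alt+F2 or "Execute a shell"):
modprobe ips
dmesg | tail
cat /proc/partitions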
I have been using lspci, dmidecode, and mpt-status to get hardware information on my Dell 1950 running Ubuntu 8.10. I'm pretty sure my server is using an embedded SCSI RAID controller from info I got from Dell's site:
PCI Express is an actual... well, PCI card, right? But dmidecode shows that I have two x8 PCI Express slots that are both available. Sooo... I'm missing something. How am I running a PCI Express SCSI controller without using a PCI Express slot? In the event of not having the kind of info that I did (i.e. the service tag), how would I be able to tell at a glance whether a component like my RAID controller was embedded or not?
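PCI Express is a bus, not just a slot form factor: onboard devices hang off PCIe links wired straight to the board, so a device can be "PCI Express" without occupying any physical slot. Two quick checks that help tell embedded from slotted:

Code:
# show physical slots and whether each is in use
sudo dmidecode -t slot
# show the PCI topology as a tree; devices that don't sit behind
# a slot's bridge are typically soldered onto the board
lspci -tv

dmidecode's slot listing should say "Available" for empty slots even while the controller still shows up in lspci, which is exactly the embedded case.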
I have a VT6421-based RAID controller. lspci shows this: 00:0a.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE RAID Controller (rev 50). The drivers that come with it appear to have been compiled against an old kernel (I'm guessing). When I try to load them I get "invalid module format". dmesg shows this: viamraid: version magic '2.6.11-1.1369_FC4 686 REGPARM 4KSTACKS gcc-4.0' should be '<running kernel version>-smp SMP mod_unload'. Does anyone know of a way to get this to work? I found the source for this, but it appears to only support Fedora, Mandrake, and RedHat. I can't get it to compile or make a driver disk.
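Before fighting the vendor module, it may be worth trying the in-kernel driver: the VT6421 is supported by libata's sata_via, which sidesteps the version-magic problem entirely. A quick test:

Code:
# unload the vendor module if loaded, then try the in-tree driver
sudo modprobe sata_via
dmesg | tail
# drives handled by libata show up as /dev/sdX
cat /proc/partitions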
I'm working on a new server and it has an Nvidia SATA array controller with two 250GB SATA drives configured in a hardware array. When the first screen comes up, I enter the option "linux dd" so it will prompt me for the drivers, but nothing ever happens. The screen says that it's loading a SATA driver for about 15 minutes, and then the screen clears and shows a plus-sign cursor on a black screen. What am I doing wrong? The only drivers that came with the HP server are for Red Hat 4 and 5 and SUSE; will any of those actually work?
My setup: P4 3.4GHz, 2GB RAM, Gigabit Ethernet.
Drive configuration:
1 x 750GB SATA connected to my RAID controller in IDE mode
1 x 120GB IDE HDD
1 x 250GB IDE HDD
My problem: I am trying to install F12 and the only drive that it sees is the 750GB SATA; it is not seeing the other 2 IDE drives. The RAID controller is an ITE 8212 in IDE mode. The BIOS sees the drives, just F12 doesn't.
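The ITE 8212 has an in-kernel libata driver, pata_it821x, which may simply not be loading automatically. It can be tried by hand from the installer's shell (Ctrl+Alt+F2). A sketch:

Code:
# load the libata driver for the ITE 8212
modprobe pata_it821x
# check whether the two IDE drives appeared
dmesg | tail
cat /proc/partitions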
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is) but still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have one HDD (without RAID), which houses my OS install, and then I have two 1TB drives (these are just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller on boot recognizes the array as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), I haven't found the equivalent for Ubuntu. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working, which my searches have not taken me to?
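On Linux, Intel motherboard RAID is normally assembled by dmraid rather than a vendor driver, which should expose the existing mirror without reformatting. A sketch; the mapper names below are examples, and partition node naming varies by dmraid version:

Code:
sudo apt-get install dmraid
sudo dmraid -r          # list the BIOS RAID set on the two 1TB drives
sudo dmraid -ay         # activate it
ls /dev/mapper          # e.g. isw_xxxxxxxx_Volume0
# mount the NTFS filesystem from the mapper partition device
# (may be ...Volume01 or ...Volume0p1 depending on dmraid version)
sudo mount -t ntfs-3g /dev/mapper/isw_xxxxxxxx_Volume01 /mnt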
Why could there not be a 3-way or even 4-way RAID level 1 (mirror)? It seems every hardware RAID controller (and at least the software I tested a few years ago) only supports a 2-way mirror. I recently tried to configure a 3ware 9650SE RAID controller. I selected all 3 drives; then RAID 1 was not presented as an option, only RAID 0 (striping, no redundancy) and RAID 5 (one level of redundancy, low performance). Is there some engineer who thinks "triple redundancy is a waste, so I'm not going to let them do that"? Or is it a manager?
Mirror RAID should be simple, even when more than 2 drives are used. The data is simply written in parallel to all the drives in the mirror set, and read from one of the drives (with load balancing across parallel and/or read-ahead operations to improve performance, though some of that is in question, too).
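For what it's worth, Linux software RAID has no such limit: mdadm happily builds an n-way mirror. A sketch of a 3-way RAID 1, with device names as examples:

Code:
# create a 3-way mirror across three disks
sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd
# watch the initial sync and the three active members
cat /proc/mdstat

So the restriction on the 9650SE looks like a firmware policy decision rather than anything inherent to RAID 1.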
I need to add the LSI drivers for the 9750 RAID controller during the install. These drivers are not included in 10.04 (or 10.04.1) and I need to install onto the RAID device I've created. LSI provides the drivers and instructions here - [URL]
Here are my steps, with the drivers on a USB drive:
Code:
Boot from the installation CD and select Install Ubuntu Server.
Press CTRL+ALT+F2 to switch to console 2 while Ubuntu detects the network.
# mkdir /mnt2 /3ware
# mount /dev/sda1 /mnt2     (NOTE: the LSI drivers are at /dev/sda1, via USB)
# cp /mnt2/9750-server.tgz /3ware
# cd /3ware ; tar zxvf 9750-server.tgz
# umount /mnt2
Remove the USB flash drive before the insmod command:
# insmod /3ware/2.6.32-21-generic/3w-sas.ko
Press CTRL+ALT+F1 to return to the installer. Continue the installation as usual. Do not reboot when the installation is complete. Press CTRL+ALT+F2 to switch to console 2 again.
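The reason for that last switch back to the console is presumably that the freshly installed system also needs the module, or it won't find its root device on first boot. A sketch of what follows, assuming the installer mounted the new system at /target:

Code:
# copy the driver into the installed system
mkdir -p /target/lib/modules/2.6.32-21-generic/kernel/drivers/scsi
cp /3ware/2.6.32-21-generic/3w-sas.ko \
   /target/lib/modules/2.6.32-21-generic/kernel/drivers/scsi/
# register it and rebuild the initramfs inside the new system
chroot /target depmod -a
chroot /target update-initramfs -u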
I was trying to update the driver for my Adaptec RAID controller. Unfortunately, Adaptec only provides RPM packages, so I converted the package using alien. After installing it with dpkg, I then tried using dkms to build the module:
Code:
root@atulsatom# dkms add -m aacraid -v <driver version>
Adding the driver was successful, but I got an error during the build:
Code:
root@atulsatom# dkms build -m aacraid -v <driver version>
Kernel preparation unnecessary for this kernel. Skipping.
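When a dkms build fails, the compiler output is kept under /var/lib/dkms, which usually pinpoints whether the source simply doesn't build against the running kernel. A sketch (substitute the actual version string):

Code:
# inspect the build log for the real compile error
cat /var/lib/dkms/aacraid/<driver version>/build/make.log
# if the source can be made to build, finish the dkms cycle
dkms build -m aacraid -v <driver version>
dkms install -m aacraid -v <driver version>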
I have been thinking about upgrading my RAID setup from a pair of Intel X25s running the software-based RAID included with Ubuntu to four Intel X25s running off a PCIe-based controller. The controller is an HP P400, which I know works great on Ubuntu (I have other machines running it with SAS drives). My desktop has an Intel S5000XVNSATAR mainboard and I have an open PCIe v1.0 8x (physical) slot that is wired 4x. The controller is 8x and will fit fine. How much of a performance hit do you think I will see running this 8x controller in the slot wired 4x with four Intel X25s in a RAID 10? Will I have enough bandwidth with the 4x for those SSDs?
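A rough back-of-envelope, assuming PCIe 1.0 and ignoring protocol overhead: each PCIe 1.0 lane carries about 250 MB/s per direction, so a x4 link tops out around 1000 MB/s. Four X25s reading sequentially at roughly 200-250 MB/s apiece could just about saturate that, so peak sequential reads may be link-limited; RAID 10 writes (two mirrored pairs, so roughly two drives' worth of unique write bandwidth) and random I/O should sit comfortably below the x4 ceiling.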
I bought a used server and it's in great working condition, but I've got a 2-part problem with the onboard RAID controller. I don't have or use RAID and want to figure out how to stop or work around the onboard SATA RAID controller. First, some motherboard specs: Arima HDAMA 40-CMO120-A800 [URL]... The problem is the 4-port integrated Serial ATA Silicon Image Sil3114 RAID controller.
Problem 1: When I plug in my SATA server hard drive loaded with Slackware 12.2 and Linux kernel 2.6.27.7, the onboard RAID controller recognizes the one drive and allows the OS to boot. Slackware gets stuck looking for a RAID array and stops at this point -........
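One common cause of this on Sil3114 boards is stale RAID metadata on the disk, which the fakeraid layer keeps trying to assemble at boot. If that turns out to be the case here, dmraid can show it and, carefully, erase it. A sketch; erasing metadata destroys the RAID set definition (though not the filesystem), so only do it on a disk that was never really part of an array:

Code:
# show any BIOS RAID metadata dmraid finds
dmraid -r
# erase the metadata from the named disk (dmraid asks for confirmation)
dmraid -rE /dev/sda

Alternatively, Silicon Image publishes a non-RAID (base/IDE) firmware for the 3114; flashed with that, the chip behaves as a plain SATA controller.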
I have a DL120 ProLiant server that has a P212 RAID card. If I install Lenny it works fine; however, I need Squeeze. If I upgrade to or do a fresh install of Squeeze, the RAID controller is no longer visible. I have done some snooping and it seems as though the cciss driver has been replaced by the hpsa driver, but I still can't seem to get the RAID card recognised. Anybody got any tips?
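If it is the cciss-to-hpsa handover that's biting, one thing that may be worth trying is hpsa's module parameter that makes it claim any Smart Array board rather than only the ones on its internal ID list. A sketch, added to the kernel command line at the GRUB prompt (or persistently via /etc/default/grub):

Code:
hpsa.hpsa_allow_any=1

With that set, check dmesg for hpsa claiming the P212 and for the logical drive appearing as /dev/sdX (hpsa uses standard SCSI disk naming, unlike cciss's /dev/cciss/c0d0).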
I had a RAID1 'device' built on two physical partitions on two drives. One of the disk controllers died and software RAID did its job; now I am working on the degraded array.
Now I want to put the old disk (sdb) back, and I am not sure what will happen. Both disks have 'raid auto' partitions, and sdb still has the file structure from before the failure. The RAID code will find an inconsistency between the two partitions. What will it decide? Will it start copying all the data from the currently running system (sda) to the old one (sdb) at boot time, as I wish?
I don't want it to write from the old one to the new one, as some months have passed and lots of changes have happened to the data.
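For what it's worth, mdadm syncs from the active array to a newly added member, never the other way around: the old disk comes back as a spare and gets rebuilt from sda. To be extra safe, the stale superblock on sdb can be wiped before re-adding. A sketch, with md0 and sdb1 as example names:

Code:
# wipe the old RAID metadata so sdb1 cannot be mistaken for a valid member
mdadm --zero-superblock /dev/sdb1
# add it to the degraded array; resync copies from the active disk to sdb1
mdadm /dev/md0 --add /dev/sdb1
# watch the rebuild
cat /proc/mdstat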
I've recently started having an issue with an mdadm RAID 6 array that's been operational for about 2500 hours.
Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the two drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference and it seems completely random. Sometimes copying a 5GB dataset results in no slowdown; other times a torrent downloading to the array at 50kB/sec does cause a slowdown, and vice versa.
The array consists of 8 WD 1.5TB drives: 6 attached to the ICH9R south bridge and 2 attached to an si3132-based PCI Express card. The array is formatted as a single ext4 partition.
Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100 MB/s for each drive, ~425 MB/s for the array).
The only thing I did notice is that udma6 is enabled for all the ICH9R drives while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.
The si3132 drives are using the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During this time, top shows very high I/O wait.
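When nothing shows in the logs, a blocked-task dump during a stall can reveal where the kernel is actually stuck. A sketch using magic SysRq (assuming it's enabled), plus per-device stats to confirm which disks hang:

Code:
# during a stall, dump all blocked (D-state) tasks to the kernel log
echo w > /proc/sysrq-trigger
dmesg | tail -50
# watch per-device utilisation; the stalled disks show 100% util, no I/O
iostat -x 1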
The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller; they now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.
Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so, which firmware is best supported by the Linux driver?
I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.
I am trying to install CentOS onto a PowerEdge 2650 with a Dell PowerEdge Expandable RAID Controller (PERC). I have 5 SCSI disks installed, and have created and initialised 2 logical volumes via the controller's setup utility (Ctrl+M after boot). After reboot, the system reports two logical volumes present.
The CentOS installer cannot find any disks when it gets to the disk partitioning/setup step; it reports no disks present. Do I need a specific driver for this controller?
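The PERC 3 generation in the 2650 is normally covered by the in-kernel megaraid drivers, so before hunting for a vendor driver it may be worth checking from the installer's shell whether the module loaded. A sketch (the exact module depends on the PERC variant):

Code:
# switch to the installer shell with Ctrl+Alt+F2, then:
lsmod | grep -i megaraid
# try loading the driver by hand if it's absent
modprobe megaraid_mbox
dmesg | tail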
I want to stop using Windows because it sucks, so I have downloaded all kinds of distributions of Linux. They all give the same error, because it seems Linux has problems with fakeraid. Right now I have OpenSuse running in VMware 7.0.1, but I want it as the only OS.
The installation goes fine, but at the end it gives a GRUB error because it cannot create the bootloader. It seems to be a common problem and I have done all the steps that I could find on Google.
I have two RAID controllers. One is integrated on the Asrock ALiveNF7G-HD720p R5.0 mainboard, and OpenSuse sees it as a JMicron controller. Because Linux failed to install on the JMicron, I also bought an EM2001 2-port PCI SATA controller card with two hard disks in RAID 0. On the EM2001 it also fails with the same error.
I want OpenSuse 11.2 working on RAID 0. I know it must be some simple commands in the terminal through a live CD to correct the bootloader and do it manually for experienced Linux users, but I'm a Windows user.
Can someone please tell me the exact steps and commands to install Linux on RAID 0 fakeraid?
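The usual recovery, once the installer has copied the system but failed at the bootloader step, is to assemble the fakeraid set from a live CD, chroot into the installed system, and install GRUB by hand. A sketch; every device and partition name below is an example that must be matched to the real layout (dmraid -r shows the set's actual name), and how cleanly legacy GRUB handles the mapper device can vary:

Code:
sudo dmraid -ay                                  # assemble the BIOS RAID 0 set
ls /dev/mapper                                   # note the set name, e.g. jmicron_XXXX
sudo mount /dev/mapper/jmicron_XXXX_part2 /mnt   # mount the root partition
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-install /dev/mapper/jmicron_XXXX            # put GRUB on the array's MBR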
I forced my workplace to forgo Windows and opt for Linux for the web and mail server. I'm setting up CentOS 5.4 on it and I ran into a problem. The server machine is an HP ProLiant DL120 G5 (quad-core processor, 4GB RAM, two SATA drives of 150GB each attached to the onboard hardware RAID controller). RAID is enabled in the BIOS. I pop in the CentOS disk and go through the installation process.
When I get to the stage where I partition my hard drive, it shows one hard drive, not as the traditional sda but as mapper/ddf1_4035305a86a354a45. I looked around and figured that I need to give CentOS the RAID drivers. I downloaded them from:
I follow the instructions and download the aarahci-1.4.17015-1.rhel5.i686.dd.gz file and unzip it using gunzip. Then, on another *nix system, I do this:
Code:
dd if=aarahci-1.4.17015-1.rhel5.i686.dd of=/dev/sdb bs=1440k
Note that I am using a USB floppy drive, hence the sdb. After that, during CentOS setup, I type: linux updates dd
It asks me where the driver is located. I tell it, and the installation continues in graphical mode. But I still get mapper/ddf1_4035305a86.a354a45 as my drive. I tried to continue and install CentOS on it. It was successful, but when I do a "df -h" it gives me /dev/mapper/ddf1_4035305a86......a354a45p1 as /boot
/dev/mapper/ddf1_4035305a86......a354a45p2 as /
/dev/mapper/ddf1_4035305a86......a354a45p3 as /var
/dev/mapper/ddf1_4035305a86......a354a45p4 as /external
/dev/mapper/ddf1_4035305a86......a354a45p5 as /swap
/dev/mapper/ddf1_4035305a86......a354a45p6 as /home
Well, I know why it's giving these, because I set it up that way, but I was hoping it would somehow change to the normal /dev/sda, /dev/sdb. That means the driver I provided did not work. I have another IBM server (5U) with a RAID SCSI drive and it shows the usual /dev/sda; it also has hardware RAID. So I know there is something wrong with the /dev/mapper/ddf1_4035305a86......a354a45p1 format.
First, is there any way that I can put the aarahci-1.4.17015-1.rhel5.i686.dd (floppy image) on a CD? I really need to set this up with RAID. I know I could simply disable RAID in the BIOS and then I would get two normal hard drives, sda and sdb, but it has to be a RAID setup. Is there any way to slipstream the driver into the CentOS DVD? In the HP link I provided above, under installation instructions, there are some instructions titled "Important", but I couldn't get them to work.
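A couple of hedged suggestions. The .dd file is just a floppy filesystem image, so it can be loop-mounted to verify its contents, and it can be written raw to a USB stick instead of a floppy, since the RHEL 5 era "linux dd" prompt can read driver disks from more than just fd0. A sketch:

Code:
# inspect what the driver disk actually contains
mkdir -p /mnt/dd
mount -o loop aarahci-1.4.17015-1.rhel5.i686.dd /mnt/dd
ls /mnt/dd
umount /mnt/dd
# write it raw to a USB stick (double-check the device name first!)
dd if=aarahci-1.4.17015-1.rhel5.i686.dd of=/dev/sdX

Note also that seeing /dev/mapper/ddf1_* names does not by itself mean the driver failed; that is dmraid's naming for a DDF-metadata BIOS RAID set, and a system can run fine on those device nodes.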
I cannot install 11.3 on a machine with an Intel RAID controller. I have tried RAID 1 using the card, and also setting the disks to individual RAID 0 and letting SUSE RAID them. With the card doing it, the machine crashes as soon as it tries to boot the first time; with SUSE doing the RAID, I just get 'GRUB' on the screen. It seems a lot of people are having similar problems; does anyone have any pointers? 11.2 installs fine. I would try to file a bug report, but every time I go to the page it's in Czech.