Hardware :: Stop Or Work Around Onboard SATA Raid Controller
Jun 12, 2010
I bought a used server and it's in great working condition, but I've got a 2-part problem with the onboard RAID controller. I don't have or use RAID and want to figure out how to stop or work around the onboard SATA RAID controller. First, some motherboard specs: Arima HDAMA 40-CMO120-A800 [URL]... The 4-port integrated Silicon Image Sil3114 SATA RAID controller is the problem.
Problem 1: When I plug in my SATA server hard drive loaded with Slackware 12.2 and its stock 2.6 Linux kernel, the onboard RAID controller recognizes the one drive and allows the OS to boot. Slackware then gets stuck looking for a RAID array and stops at this point ...
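If the boot hang really is the OS tripping over leftover fakeraid signatures, one hedged first step (from a live CD, with backups) is to check for and erase the Sil3114's on-disk metadata with dmraid; /dev/sda below is an assumption for the server drive, and the erase only removes the RAID signature, not the filesystem:
Code:
dmraid -r                 # list any fakeraid metadata found on attached drives
dmraid -r -E /dev/sda     # erase that metadata from the named drive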
I'm working on a new server that has an Nvidia SATA array controller with two 250 GB SATA drives configured in a hardware array. When the first screen comes up I enter the option Linux DD so it will prompt me for the drivers, but nothing ever happens. The screen says that it's loading a SATA driver for about 15 minutes, and then the screen clears to a plus-sign cursor on a black background. What am I doing wrong? The only drivers that came with the HP server are for Red Hat 4 and 5 and SUSE; will any of those actually work?
I've recently started having an issue with an mdadm RAID 6 array that has been operational for about 2500 hours.
Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the two drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This started within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference, and it seems completely random: sometimes copying a 5 GB dataset causes no slowdown, while other times a torrent downloading to the array at 50 kB/sec does cause one, and vice versa.
The array consists of 8 WD 1.5 TB drives, 6 attached to the ICH9R south bridge and 2 attached to an si3132-based PCI Express card. The array is formatted as a single ext4 partition.
Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100 MB/sec for each drive, ~425 MB/sec for the array).
The only thing I did notice is that udma6 is enabled for all the ICH9R drives, while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.
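For reference, the transfer mode can be inspected and (cautiously) forced with hdparm; /dev/sdg below is a stand-in for one of the si3132 drives, and -X is documented as dangerous, so this is only a diagnostic sketch:
Code:
hdparm -I /dev/sdg | grep -i udma   # show supported and currently selected UDMA modes
hdparm -X udma6 /dev/sdg            # attempt to force UDMA/133; may fail with the I/O error described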
The si3132 drives use the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During a stall, top shows very high I/O wait time.
The si3132 controller appears to have the original firmware from 2006 loaded. There are firmware updates available on the Silicon Image website for this controller; they now offer a separate firmware for RAID operation (some sort of hybrid controller/software RAID the chip supports) and a separate firmware for standard IDE use.
Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so, which firmware is best supported by the Linux driver?
I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy about just trying it and finding out, as it could knock 2 disks of my array out of action.
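A hedged way to capture more data while a stall is actually happening (iostat comes from the sysstat package; /dev/sdg is again a stand-in for an si3132 drive):
Code:
watch cat /proc/mdstat        # array and member state during the stall
iostat -x 1                   # per-disk await and %util, to see which members block
smartctl -l error /dev/sdg    # recheck the drive's SMART error log afterwards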
I want to stop using Windows because it sucks, so I have downloaded all kinds of Linux distributions. They all give the same error, because it seems Linux has problems with fakeraid. Right now I have openSUSE running in VMware 7.0.1, but I want it as the only OS.
The installation goes fine, but at the end it gives a GRUB error because it cannot create the bootloader. It seems to be a common problem, and I have tried all the steps that I could find on Google.
I have two RAID controllers. One is integrated on the Asrock ALiveNF7G-HD720p R5.0 mainboard, and openSUSE sees it as a JMicron controller. Because Linux failed to install on the JMicron, I also bought an EM2001 2-port PCI SATA controller card with two hard disks in RAID 0. On the EM2001 card it fails with the same error.
I want openSUSE 11.2 working on RAID 0. I know it must be a few simple commands in a terminal from a live CD to correct the bootloader manually, easy for Linux users, but I'm a Windows user.
Can someone please tell me the exact steps and commands to install Linux on RAID 0 fakeraid?
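No answer was recorded here, but the usual live-CD repair sequence for a fakeraid install looks roughly like the sketch below. The jmicron_XXXX name and the partition number are placeholders; you would read the real names from /dev/mapper after activating the set:
Code:
dmraid -ay                                     # activate the fakeraid array
ls /dev/mapper/                                # note the real array name, e.g. jmicron_XXXX
mount /dev/mapper/jmicron_XXXX_part2 /mnt      # assumes / is the second partition
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt
grub-install /dev/mapper/jmicron_XXXX          # reinstall GRUB to the array's MBR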
I forced my workplace to forgo Windows and opt for Linux for the web and mail server. I'm setting up CentOS 5.4 on it and I ran into a problem. The server machine is an HP ProLiant DL120 G5 (quad-core processor, 4 GB RAM, two SATA drives of 150 GB each attached to the onboard hardware RAID controller). RAID is enabled in the BIOS. I pop in the CentOS disk and go through the installation process.
When I get to the stage where I partition my hard drive, it shows one hard drive, not as the traditional sda but as mapper/ddf1_4035305a86a354a45. I looked around and figured that I need to give CentOS the RAID drivers. I downloaded them from:
I followed the instructions, downloaded the aarahci-1.4.17015-1.rhel5.i686.dd.gz file, and unzipped it with gunzip. Then, on another *nix system, I do this:
Code: dd if=aarahci-1.4.17015-1.rhel5.i686.dd of=/dev/sdb bs=1440k
Note that I am using a USB floppy drive, hence the sdb. After that, during CentOS setup, I type: linux updates dd
It asks me where the driver is located. I tell it, and the installation continues in graphical mode. But I still get mapper/ddf1_4035305a86.a354a45 as my drive. I tried to continue installing CentOS on it. It was successful, but when I do a "df -h" it gives me /dev/mapper/ddf1_4035305a86......a354a45p1 as /boot
/dev/mapper/ddf1_4035305a86......a354a45p2 as /
/dev/mapper/ddf1_4035305a86......a354a45p3 as /var
/dev/mapper/ddf1_4035305a86......a354a45p4 as /external
/dev/mapper/ddf1_4035305a86......a354a45p5 as /swap
/dev/mapper/ddf1_4035305a86......a354a45p6 as /home
Well, I know why it's giving these, because I set it up that way, but I was hoping it would somehow change to the normal /dev/sda, /dev/sdb. That means the driver I provided did not work. I have another IBM server (5U) with a RAID SCSI drive and it shows the usual /dev/sda; it also has hardware RAID. So I know there is something wrong with the /dev/mapper/ddf1_4035305a86......a354a45p1 format.
First, is there any way that I can put the aarahci-1.4.17015-1.rhel5.i686.dd (floppy image) on a CD? I really need to set this up with RAID. I know I could simply disable RAID in the BIOS and then I would get two normal hard drives, sda and sdb, but it has to be a RAID setup. Is there any way to slipstream the driver into the CentOS DVD? On the HP link I provided above, under installation instructions, there are some instructions titled "Important", but I couldn't get them to work.
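On getting the floppy image onto a CD: the RHEL 5 era installer's "linux dd" prompt can read a driver-disk image from media other than a floppy, so one hedged approach is to burn the raw .dd file onto a data CD and point the installer at it. The burner device name below is an assumption:
Code:
gunzip aarahci-1.4.17015-1.rhel5.i686.dd.gz                  # unpack the raw floppy image
mkisofs -r -J -o dd.iso aarahci-1.4.17015-1.rhel5.i686.dd    # wrap the image file on a data CD
cdrecord dev=/dev/cdrw dd.iso                                # burn it, then select it at the dd prompt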
I'm not all that familiar with what it is that the expensive SATA cards do, other than providing onboard cache and many ports, but all things considered, what can't a CPU and lots of RAM do that an expensive SATA card can? This assumes, of course, that you weren't planning on using the CPU to run other applications (i.e. a SAN-type setup).
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
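Before anything as drastic as --create --assume-clean, the usual less destructive first attempt is a forced assemble, which only adjusts event counts rather than rewriting superblocks from scratch. A minimal sketch with the device names from the post (compare the event counts before forcing anything):
Code:
mdadm --examine /dev/sd[abcd]2 | grep -E 'Event|State'    # how far apart are the members?
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2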
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
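For what it's worth, when MegaCLI can see a logical drive but no node appears in /dev, one hedged check is to rescan the controller's SCSI host rather than reboot; host0 below is an assumption, so pick the host that belongs to the 8880EM2:
Code:
ls /sys/class/scsi_host/                          # find the host number for the controller
echo "- - -" > /sys/class/scsi_host/host0/scan    # trigger a rescan of that host
dmesg | tail                                      # see whether new sd devices were attached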
Now that I've set up a 4x750GB onboard RAID 10, I'd like to install openSUSE 11.2 (with GRUB dual boot). From the BIOS I've partitioned my HDD so I have:
- Array1 = 30 GB NTFS, 30 GB XFS, 30 GB Solaris, then an extended partition for each OS's swap, temp, etc.
- Array2 = 30 GB NTFS (D: My Documents); 30 GB (/home); 30 GB (unformatted); and then a big partition for game installs and VMware images (Win98 for my old games, WinXP32, etc.)
As a (sad) matter of fact, installing WinXP64 and/or Vista64 works perfectly. I see the array as partitioned by the BIOS, and everything works.
But I can't get my openSUSE 11.2 (64-bit) installed on the drive; for now I have it running in a VMware VM ... on Windows :s (while I'd prefer the opposite). When I boot from the openSUSE 11.2 DVD (downloaded), it says it can't install as there is NO HDD!! OK, fine: I plug a PATA HDD back in, start the install, then move the partition over to the SATA. BUT even so, I can't start SUSE ... I then launch the repair/recovery tool on the install DVD, but it says that there is no root partition, no SUSE partition to fix.
I have a computer and I need to know whether its SATA controller supports hotplugging. Therefore I (think I) need to know which SATA controller is used in my computer. Can anyone tell me how I can find that piece of info?
PS: Some info on the environment: it is running an AMD Geode processor which only supports IDE, and therefore the board has a PATA-to-SATA converter built in. When I do 'lspci' it only shows the IDE connection: root@voyage:/etc# lspci
00:01.0 Host bridge: Advanced Micro Devices [AMD] CS5536 [Geode companion] Host Bridge (rev 33)
00:01.2 Entertainment encryption device: Advanced Micro Devices [AMD] Geode LX AES Security Block
00:06.0 Ethernet controller: VIA Technologies, Inc. VT6105M [Rhine-III] (rev 96)
00:07.0 Ethernet controller: VIA Technologies, Inc. VT6105M [Rhine-III] (rev 96)
00:08.0 Ethernet controller: VIA Technologies, Inc. VT6105M [Rhine-III] (rev 96)
00:09.0 Ethernet controller: VIA Technologies, Inc. VT6105M [Rhine-III] (rev 96)
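A hedged way to dig further from a shell (the -k flag needs a reasonably recent pciutils, which may not be on an old embedded image like this one):
Code:
lspci -nn                           # list devices with vendor:device IDs you can look up
lspci -k                            # show which kernel driver is bound to each device
dmesg | grep -i -e ahci -e sata     # AHCI-driven ports generally support hotplug; legacy IDE does not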
I have been trying for a few days to install CentOS 5.4 on an IBM x306 and I cannot get it to properly handle the Adaptec Embedded SATA HostRAID Controller. I have been working with Linux for a few years, but this is new territory for me. I typically use Debian-based distros, but I did some research on the IBM site and found out that RHEL is a supported OS for this machine. So, I decided to give Cent a try. I have some experience with Fedora, so it's not totally foreign to me.
Anyway, I'm a bit confused. Using the IBM RAID utility, I set up a mirrored pair of 1GB SATA HDDs. When I run the CentOS installer, it sees the pair as a single array. I am able to partition the array and complete the install, but when I boot into the OS, it sees the drives as 2 separate devices, sda and sdb. I can pull either one of the drives and boot from a single disk, but it doesn't seem to behave as a mirrored pair: if I make changes on sda, they are not replicated to sdb. Also, I can't use the CLI or GParted to format the remaining space on the array; I get an error either way. I believe this is because CentOS doesn't have a driver for the RAID controller, but I don't see why it would work in the installer and not in the installed OS.
My next approach was to start over and run "linux dd" at the start of the installation. I tried to find the driver for the controller on the IBM site so I could load it when prompted, but couldn't find a newer version than one for RHEL 4 Update 3 (I'm assuming this would correspond to CentOS 4.3). I tried it anyway, but when I select the floppy during setup, it tells me it's not for this version of CentOS. I read several times that there are .img files that might help me in the 'images' directory of disc one, but I only see diskboot.img, minstg2.img, and stage2.img. I don't think any of these are what I'm looking for; I thought there was supposed to be a drvblock.img or driverdisk.img.
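A note on the symptom above: Adaptec HostRAID is BIOS-assisted fakeraid, and the installer typically activates it through dmraid. If the installed system skips that step, the members reappear as plain sda/sdb with no mirroring, which matches what was described. A minimal check from the installed OS, assuming the dmraid package is available:
Code:
dmraid -r          # does the installed OS see the HostRAID metadata at all?
dmraid -ay         # activate the mirror; /dev/mapper/ entries should appear
ls /dev/mapper/    # the array device the installer used should be back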
A friend of mine bought a prebuilt system from a local computer store with a Gigabyte GA-H55M-UD2H motherboard, and he says he can boot Fedora off the CD but cannot install it because it doesn't detect the hard drive.
I assume it can't detect the SATA controller; as far as I know, hard drives use a universal driver, so the only way for the hard drive not to be detected is if the controller itself is not detected.
I was wondering if anyone here has this motherboard and has this problem.
I think I gave him a Fedora 12 CD. (EDIT: It's Fedora 11 judging by the filename of the ISO I have on my Desktop...)
I myself haven't used Linux in a while, since I completed my Linux course, and this game I play needs Windows; I am trying not to play that game as much anymore because of free-time issues... I have an Ubuntu dual boot but have become used to Windows again. I am going to try using Linux in a VM to get accustomed to it again; this way I can play with different versions/distros and still have the Windows 7 host as a fallback if something doesn't work...
On an Asus P5Q motherboard, CentOS 5.3 x64. (Separate thread, as the other one has turned into just a discussion of my lm_sensors issue.) The motherboard's primary SATA controller is the Intel P45 chipset: 6 ports, set to AHCI in the BIOS, and all works a charm. Attached are 6 SATA drives (2 x OS, 3 x data RAID 5, 1 x backup). There is also a Marvell chip that provides PATA, plus an underlying SATA chip (2 ports) that can do what Asus calls Drive Xpert (stripe or mirror RAID) or be set to act as standard SATA ports.
It's a SIL5723 behind what CentOS sees as "03:00.0 IDE interface: Marvell Technology Group Ltd. 88SE6121 SATA II Controller (rev b2)". Physically the Marvell has only a single optical drive attached to its PATA interface. CentOS has loaded pata_marvell (I assume for the PATA side of it), but I have no /dev/cdrom or /dev/dvd, so no optical drive. CentOS is trying to use 'ahci' for the Marvell's SATA ports. Dmesg tells me this when it 'sees' them at boot:
scsi6 : ahci
scsi7 : ahci
scsi8 : ahci
ata7: SATA max UDMA/133 abar m1024@0xfeaffc00 port 0xfeaffd00 irq 16
ata8: SATA max UDMA/133 abar m1024@0xfeaffc00 port 0xfeaffd80 irq 16
ata9: DUMMY
It attempts to read SATA drives attached to it, but it just errors trying to access them on boot (e.g. "ata7: failed to recover some devices, retrying in 5 secs", etc.). I need the extra SATA ports, and I could really do with having a usable CD-ROM drive. I found a driver for the Marvell 88SE6121 from Asus, but the latest driver in that package is for RHEL 5.2, and the install script fails with a "Generic Protection Error". That driver looks for kernel "2.6.18-92.el5" in the base_ver variable in its script; changing the base_ver to -128 doesn't help. I have uploaded the driver here: [URL]. Is it modifiable to work on 5.3 / soon-to-be 5.4?
I am using a Dell Inspiron 1545 (Core 2 Duo T6600, 3 GB RAM, 320 GB SATA HDD), and in my BIOS I have these SATA options: (1) Disabled, which disables SATA so the HDD is not detected; (2) ATA, which is the mode my Windows install runs in; (3) AHCI. Which mode will work? I am also using VMware to install Red Hat Linux 5, and my HDD is not detected. If you need any information, tell me and I will post any command output or log that could help solve this problem.
I upgraded F14 to F15; however, F15 no longer recognizes my 3 SATA disks connected through the Marvell controller. The controller is an integrated Marvell 88SE6480 chipset. The controller has its own manufacturer driver (mv64xx), but it was intended for RHEL 5.4, and installation of this driver fails to generate the mv64xx module.
I bought a Rampage III Extreme Black Edition, which has a SATA III controller (Marvell 88SE9182). I tried to install Fedora 15, but F15 just doesn't recognize any of my HDs placed in the SATA III ports. Has anyone succeeded in installing Linux on this SATA controller?
I just finished a build of a new GNU/Linux boxen with openSUSE 11.2. I have an MSI Big Bang Xpower X58 motherboard which has two SATA controller chips: the standard Intel ICH10R for SATA 3.0 Gb/s and a Marvell 9128 for SATA 6.0 Gb/s. The BIOS recognizes the Western Digital Caviar Black 6.0 Gb/s drive on either SATA controller chip; however, I am unable to install (and boot) when the drive is connected to the Marvell-controlled ports. As you can guess, I'd like to boot from the faster interface!
1. The BIOS allows me to select the Western Digital drive as a secondary boot device, so I know, at least at the BIOS level, it's there. This is true whether I have the drive connected to the Intel or Marvell ports. (The DVD drive is the primary boot device.)
2. When trying to install openSUSE 11.2 from DVD, the installer says that it can't find any hard drives on my system when I have the drive connected to the Marvell port. The installer finds the drive fine when it is connected to the Intel port.
3. I installed everything with the drive connected to the Intel port. I switched the drive to the Marvell port afterward, and the system refuses to boot completely, stalling at the point where it starts to look for other filesystem partitions. This led me to conclude that perhaps the problem is with openSUSE and not hardware weirdness from the system having two separate SATA controllers.
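One hedged explanation for item 3: openSUSE 11.2's initrd contains only the modules configured at install time, and device names can change when the drive moves to the other controller. A minimal thing to verify before moving the drive (and to pair with updating /etc/fstab and GRUB's menu.lst if names changed):
Code:
grep INITRD_MODULES /etc/sysconfig/kernel   # make sure "ahci" is listed for the Marvell ports
mkinitrd                                     # rebuild the initrd so the module is present at boot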
I'm having a problem with the installation of Ubuntu Server 10.04 64-bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc.). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'generic' Linux driver to solve the problem here.
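Absent an Ubuntu-specific driver, a hedged first step is to identify the exact controller from the installer's shell and try the in-kernel ServeRAID modules; which module applies depends on what lspci reports, so both names below are assumptions:
Code:
# during the install, switch to a console with Alt+F2, then:
lspci -nn | grep -i raid     # identify the exact RAID controller
modprobe ips                 # in-kernel driver for older ServeRAID generations
modprobe aacraid             # in-kernel driver for Adaptec-based ServeRAID generations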
I have been using lspci, dmidecode, and mpt-status to get hardware information on my Dell 1950 running Ubuntu 8.10. I'm pretty sure my server is using an embedded SCSI RAID controller from info I got from Dell's site:
PCI Express is an actual... well, PCI card, right? But dmidecode shows that I have two x8 PCI Express slots that are both available. So... I'm missing something. How am I running a PCI Express SCSI controller without using a PCI Express slot? In the event of not having the kind of info that I did (i.e. the service tag), how would I be able to tell at a glance whether a component like my RAID controller was embedded or not?
I have a VT6421-based RAID controller. lspci shows this: 00:0a.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE RAID Controller (rev 50). The drivers that come with it appear to have been compiled against an old kernel (I'm guessing). When I try to load them I get "invalid module format". dmesg shows this: viamraid: version magic '2.6.11-1.1369_FC4 686 REGPARM 4KSTACKS gcc-4.0' should be '2.6.x-smp SMP mod_unload'. Does anyone know of a way to get this to work? I found the source for this, but it appears to only support Fedora, Mandrake, and Red Hat. I can't get it to compile or make a driver disk.
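One avenue worth noting: mainline kernels have long carried an in-tree libata driver for the VT6421, so the vendor viamraid module may be avoidable entirely. A sketch, assuming no fakeraid set needs to be preserved (and dmraid as the fallback if one does):
Code:
modprobe sata_via    # in-tree driver for the VT6421; disks appear as plain /dev/sdX
dmraid -ay           # if the disks do carry a VIA fakeraid set, activate it this way instead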
I installed Ubuntu as well, but was still using the Windows loader. I eventually installed Ubuntu Studio when I got another HDD (which is where I installed it). Don't ask me how I did it. As of now, no loader automatically pops up; it acts as if there is nothing to boot from. However, I can hit F12 and choose which HDD to boot from. On one, it comes up with the Windows loader with Vista and Ubuntu. I can choose Vista and it works just fine. However, if I choose Ubuntu it goes a little nuts and restarts. If I choose the other HDD, it boots up GRUB with options for Ubuntu and Ubuntu Studio. Studio works (which I'm currently on), and pretty much the same thing happens as before when I choose Ubuntu. So if the 2 OSes that I need work (I only really use Windows for games), then what is the problem? I know it's a little annoying having to remember to hit F12 and choose your OS every time (and of course the extremely annoying Vista startup times). Some other things in both Studio and Vista don't want to work like they should; this is about the only thing left that I could think of that may have something to do with it. My wireless card, plus an old wireless card I found, won't work with either.
Yet they both work on other computers running Vista and/or Ubuntu (not Studio). I also keep having display problems on both (dual monitors and graphics alike); even on Studio I sometimes have to restart just to be able to go fix them. My wireless controller receiver (for games) will stop working randomly and needs to be unplugged and plugged back in. I also have problems with Studio recognizing SD cards and DVDs/CDs. I am not sure much of anything can be done aside from a wipe and fresh install on both. The bad thing is I don't have my other computer with me and can't afford to buy a 1 TB HDD right now.
My setup:
P4 3.4 GHz, 2 GB RAM, Gigabit Ethernet
Drive configuration:
1 x 750 GB SATA, connected to my RAID controller in IDE mode
1 x 120 GB IDE HDD
1 x 250 GB IDE HDD
My problem: I am trying to install F12, and the only drive that it sees is the 750 GB SATA; it is not seeing the other 2 IDE drives. The RAID controller is an ITE 8212 in IDE mode. The BIOS sees the drives; F12 just doesn't.
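The ITE 8212 has in-tree drivers (it821x on the old IDE layer, pata_it821x under libata), so a hedged first check from the installer's shell is whether either module got loaded:
Code:
lsmod | grep -i it821    # is it821x or pata_it821x loaded at all?
modprobe pata_it821x     # libata driver for the ITE 8212, then recheck with: fdisk -l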
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is), but still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD without RAID (which houses my OS install), and then I have 2 x 1 TB drives (these are just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller on boot recognizes the array as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), I haven't found a way to get at the array from Ubuntu. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working, which my searches have not led me to?
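For Intel motherboard fakeraid like this, the usual Ubuntu route is the dmraid package, which maps the existing array without reformatting. A minimal sketch; the isw_* name and the partition suffix are assumptions to be read from /dev/mapper:
Code:
sudo apt-get install dmraid     # userspace support for BIOS "fakeraid" metadata
sudo dmraid -ay                 # activate; Intel sets usually appear as /dev/mapper/isw_*
ls /dev/mapper/                 # find the real array and partition names
sudo mount /dev/mapper/isw_XXXX_Volume01 /mnt    # hypothetical name; mounts the NTFS mirror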
I want to install RHEL 4.7 64-bit on one of my servers (a Supermicro SuperServer) that has two RAID controllers:
1. Intel
2. Adaptec
We are using the Adaptec, with RAID 1 across 2 x 320 GB hard disks.
POINT: If we install RHEL 5.3, it recognizes the RAID controller and shows a single logical volume of 298 GB, meaning it works fine. But when we try to install RHEL 4.7, it shows two hard disks of 298 GB and 298 GB, meaning it is unable to recognize the RAID controller. So the issue is the driver: the CD we got from Supermicro has drivers for RHEL 4 through RHEL 4 Update 6. We are building our DR site and it is necessary for us to install RHEL 4.7 to make it identical. I have searched a lot and spent more than three days on this continuously, and am still unable to find the solution.
Why could there not be a 3-way or even 4-way RAID level 1 (mirror)? It seems every hardware RAID controller (and at least the software I tested a few years ago) only supports a 2-way mirror. I recently tried to configure a 3ware 9650SE RAID controller. I selected all 3 drives; RAID 1 was then not presented as an option, only RAID 0 (striping, no redundancy) and RAID 5 (one level of redundancy, low performance). Is there some engineer who thinks "triple redundancy is a waste, so I'm not going to let them do that"? Or is it a manager?
Mirror RAID should be simple, even when more than 2 drives are used. The data is simply written in parallel to all the drives in the mirror set, and read from one of the drives (with load balancing over parallel and/or read-ahead operations to improve performance, though some of this is in question, too).
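For what it's worth, Linux md has no such restriction: mdadm builds an n-way RAID 1 directly, exactly as described above (parallel writes to every member, reads balanced across them). A minimal sketch with hypothetical device names:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1   # 3-way mirror
cat /proc/mdstat    # shows [3/3] [UUU] once the initial sync completes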