Fedora :: MDADM On 12 64bit - Error "mdadm: Cannot Add Disks To A 'member' Array, Perform This Operation On The Parent Container"
Nov 22, 2009
Here's a brief description of my system:
120GB Sata HDD - Primary OS drive
3 x 1.0TB Sata HDD - Raid 5 array
This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB, which I had problems with - Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disk under the Live CD and proceeded to install F12.
Once F12 was installed, I loaded up mdadm to re-activate my Raid 5 array, using 'sudo mdadm --assemble --uuid=(the uuid)' - and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. Erased /dev/sdb, and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb' and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.
[Code].....
I don't believe the hard drives are connected in the exact same order they were in before - I disconnected everything in the system and blew it out (it was pretty dusty)
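A plausible explanation, given the 'nvidia_raid_member' signature seen earlier: one or more drives still carry leftover BIOS fakeraid (external container) metadata, so mdadm treats /dev/md0 as a member of a parent container and refuses the per-array --add. A rough diagnostic sketch - the drive letters and container name are assumptions, and zeroing a superblock destroys any RAID signature on that disk:
Code:
# See whether md0 reports a parent container, and what metadata each disk carries
sudo mdadm --detail /dev/md0
sudo mdadm --examine /dev/sdb /dev/sdc /dev/sdd
# If /dev/sdb only holds stale fakeraid/container metadata, wipe it and retry the add
sudo mdadm --zero-superblock /dev/sdb
sudo mdadm --add /dev/md0 /dev/sdb
# If --detail shows a real parent container (e.g. /dev/md/imsm0), add the disk there instead:
# sudo mdadm --add /dev/md/imsm0 /dev/sdb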
View 1 Replies
Sep 10, 2010
I have a 7-drive RAID array on my computer. Recently, my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan" and, after the array is partially assembled, adding the other drives with "mdadm --add /dev/md0 /dev/sdc1". These both return errors and will not complete on the new raid card.
Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.
[code]....
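When an assemble comes up this short, comparing the event counts and device roles recorded in each member's superblock usually shows which drives dropped out; if the counts are close, a forced assemble can bring the array back up. A sketch only - the member names below are placeholders:
Code:
# Inspect each member's superblock: note the Events count and the device role
sudo mdadm --examine /dev/sd[b-h]1
# Stop the half-assembled array before retrying
sudo mdadm --stop /dev/md0
# Force assembly from the named members (only if event counts differ slightly)
sudo mdadm --assemble --force /dev/md0 /dev/sd[b-h]1
cat /proc/mdstat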
View 4 Replies
View Related
Jan 11, 2010
I am planning on setting up a 4x1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm"; all worked fine except for the following error:
Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found failed.
The end result is the /dev/md0 device has not been created, as can be seen here:
Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
After googling, I found the following bug which describes the issue: [URL] However it was reported way back in April 2009, and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug, to get me up and running?
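One workaround that has been reported for that bug is simply creating the missing device node by hand, or letting mdadm create it during assembly; a sketch, assuming md0 and the standard md major number 9:
Code:
# Create the block device node that MAKEDEV would have made
sudo mknod /dev/md0 b 9 0
# Or let mdadm create missing nodes itself when assembling
sudo mdadm --assemble --scan --auto=yes
sudo mdadm --detail /dev/md0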
View 4 Replies
View Related
Jan 21, 2011
When I start my raid5, only 2 disks of 3 are active on md0. The 3rd disk is inactive on md_d0. When I do mdadm --examine, the two active disks report 2 active, 2 working, 1 failed; the inactive disk reports 3 active, 3 working, 0 failed.
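A common cause is the third disk being grabbed into the partitionable md_d0 device before md0 finishes assembling; stopping both and reassembling from all three members explicitly is the usual first step. A sketch with placeholder device names:
Code:
# Stop both the degraded array and the stray partitionable one
sudo mdadm --stop /dev/md0
sudo mdadm --stop /dev/md_d0
# Reassemble from all three members explicitly
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat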
View 2 Replies
View Related
Jun 7, 2010
I just had a whole 2TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my raid 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is the OS picks up the disks, but it doesn't see the partitions.
I ran smartctl on all the drives in question and they're all in good working order.
Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any possible errors that might be present?
Here is what the "good" drive looks like when I use sfdisk:
Quote:
sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 121600 121601- 976760001 83 Linux
/dev/sda2 0 - 0 0 0 Empty
[Code]....
View 2 Replies
View Related
Mar 30, 2010
I tried to install Ubuntu 10.04 using the beta alternate install CD.
Everything went fine until the partitioning section.
I chose manual partitioning and all my existing partitions were detected correctly, including my 2 mdadm raid0 arrays.
I chose md0 as my / partition and chose to format the partition.
I chose md1 as my /home partition and chose to keep the data.
When I chose to continue and write the changes to disk, the install started to create an ext4 partition on md0; the installer then stopped with an error that the kernel could not reread the partition table.
I aborted the installation at this point.
Now I cannot access either of my arrays.
I have booted a livecd and installed mdadm. When I checked /etc/mdadm/mdadm.conf my existing arrays were already listed.
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
[Code].....
View 3 Replies
View Related
Jul 31, 2010
I had a raid array working great in 9.04 with mdadm, and I just recently upgraded to 10.04 (clean install). I'm trying to reassemble the array and having a dickens of a time. When I try to recreate the array with:
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdd /dev/sdc
I get this:
[code]....
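Worth stressing before going further: --create writes new superblocks and can destroy an otherwise recoverable array, so after a clean reinstall the safer first attempt is --assemble, optionally forced. A sketch under that assumption, using the same device names as above:
Code:
# Read the existing superblocks first - they show the old array's UUID and member roles
sudo mdadm --examine /dev/sdb /dev/sdc /dev/sdd
# Try a plain assemble, then a forced one if the members disagree slightly on event counts
sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdd /dev/sdc
sudo mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdd /dev/sdc
cat /proc/mdstat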
View 9 Replies
View Related
Apr 10, 2011
I have created a RAID 5 array using the built-in Disk Utility. This is great; I formatted it with ext4 and mounted it. However, on reboot Disk Utility shows the RAID array as not running, with the state "Not running, partially assembled". I have to stop the array, then restart it, then mount it before I can access what is on it. Obviously this is not very good, as I often have the system shut down at night to conserve energy, and having to do this every time it boots is a pain. Could someone please explain in plain English what I need to do to get my array to start and mount on startup?
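The usual way to make an mdadm array come up at boot on Ubuntu is to record it in /etc/mdadm/mdadm.conf, rebuild the initramfs, and mount it from /etc/fstab. A sketch - the array name, filesystem and mount point below are assumptions:
Code:
# Append a UUID-based ARRAY definition to mdadm's config
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
# Make the early-boot environment aware of it
sudo update-initramfs -u
# Mount it automatically at boot (adjust device, mount point and filesystem)
echo '/dev/md0  /mnt/raid  ext4  defaults  0  2' | sudo tee -a /etc/fstab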
View 4 Replies
View Related
Aug 14, 2010
I'm running a Debian home server with a 3-disk (1GB each) raid 5 array using mdadm (the OS is on a separate disk). Now, smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (except for a backup of valuable data). I found some articles on how to fix these sectors, but I'm unaware what the result on the whole array will be.
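With the array still fully redundant, one low-risk option is to have md rewrite the bad sectors from parity by running a repair pass; if the disk keeps accumulating pending sectors afterwards, fail it out and replace it. A sketch, assuming the array is /dev/md0 and the suspect member is /dev/sdb1 (both placeholders):
Code:
# Rewrite any unreadable sectors from parity across the whole array
echo repair | sudo tee /sys/block/md0/md/sync_action
cat /proc/mdstat   # watch progress
# If the disk keeps failing SMART afterwards, swap it out:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sudo mdadm /dev/md0 --add /dev/sdd1   # hypothetical replacement partition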
View 4 Replies
View Related
May 12, 2010
I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically, one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died and I could do with either rebuilding the array or getting the data off another way.
I have a spare machine I could use for the recovery process. It has a spare drive but it's only 120GB; I also have a bigger 320GB disk, but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive, or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB, or will it have to be 500GB? Next question is what is the best version of Linux to use; I have Knoppix 6.2 and Ubuntu (not sure which version) already. I noticed that mdadm isn't installed by default on Ubuntu.
View 1 Replies
View Related
Dec 1, 2010
When I set up Ubuntu 10.10 I had only one hdd around, so I installed my system with the idea that I would add the 2nd hdd for raid1 later on. Last weekend I wanted to add the hdd, but discovered that Ubuntu had created a raid0 array. So I went on and tried different things: removing the 1st hdd from the raid0 array, creating a raid1 with two disks, and so on... I finally could synchronize both disks, but after a reboot the raid0 array appeared again with only one disk. Now I know I should have written the mdadm.conf and fstab files... My last tries resulted in a missing superblock. Here is the story:
[Code].....
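For reference, the usual way to build a two-disk raid1 when only one disk is present is to create it degraded with the keyword 'missing' and add the second disk later, then record the array so it reassembles the same way after a reboot. A sketch with placeholder partitions:
Code:
# Create a degraded raid1 with a slot reserved for the future disk
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
# Record it so it comes back identically on reboot
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u
# Later: copy the partition layout to the new disk and add it to the mirror
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
sudo mdadm /dev/md0 --add /dev/sdb1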
View 1 Replies
View Related
May 21, 2011
Trying to troubleshoot an issue I'm having with mdadm. I have a raid 5 array consisting of 5 2TB Western Digital Green drives. It has been working fine for the last 6-7 months but recently has stopped working. After rebooting I get an error something like "unable to mount /mnt/storage", which is the filesystem on the raid array; the raid array is /dev/md0. When I do a "sudo mdadm --assemble --scan" I get the error: mdadm: /dev/md0 assembled from 3 drives - not enough to start the array. All the drives are there and I can see the correct partition information if I load them up in parted. I didn't get any emails or notification on whether the drives failed, so I'm running a SMART check on them now.
View 1 Replies
View Related
Mar 2, 2011
I have a server that was running a hardware isw raid on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So, I shut down the system, removed the failing drive and installed a new drive (same size). On reboot I went into the Intel raid setup; it did show the new drive and I was able to set it to rebuild the raid. So, continuing the reboot, everything came up just fine except the raid 1 on the system disk. I have tried many times to get the system to rebuild the raid using dmraid, but to no avail - it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed. At present when I look at the system it does not show up with a raid setup on the system disk (this comprises the entire 1TB disk with 2 partitions, sda1 as / and sda2 as swap). Problem: I have decided to forego the Intel raid and just use mdadm. I have a test system set up to duplicate (not the software, but the disk partitions) the server setup.
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
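A migration sketch for the test box, assuming the goal is a plain mdadm raid1 across the two 1TB disks: clear the old isw metadata from the empty disk, build the mirror degraded on it, copy the data across, then add the original disk. Device names, partition numbers and the dmraid step are all assumptions to verify against the actual setup:
Code:
# Erase leftover Intel (isw) fakeraid metadata from the new, empty disk
sudo dmraid -r -E /dev/sdb
# Mirror sda's partition layout onto sdb, then create a degraded raid1 on sdb
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# Copy the root filesystem onto the mirror, fix fstab/grub, boot from it,
# then wipe sda and add it as the second half:
# sudo mdadm /dev/md0 --add /dev/sda1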
View 12 Replies
View Related
Feb 5, 2011
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do it never recovers properly and tells me that I have a faulty spare in my array. More-specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sorta-thing. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID5 mdadm device (which gives me a bit less than 4TB.)
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and re-booted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:
Code:
user@guybrush:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc5[3] sdb5[1] sda5[0]
3779096448 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[Code]...
View 9 Replies
View Related
Jul 27, 2010
After a failed upgrade from 9.10 to 10.04 I had to format my computer and do a clean install of 10.04, and now my mdadm raid5 array won't start. My array is called "The Library", and I believe the space between "The" and "Library" is causing the command Disk Utility uses to start the array to fail. The exact error is: An error occurred while performing an operation on "The Library" (RAID-5 Array): The operation failed
Error assembling array: mdadm exited with exit code 1: mdadm: unrecognised word on ARRAY line: Library
mdadm: unrecognised word on ARRAY line: Library
[code]....
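The unquoted space in the name is indeed what trips up the ARRAY line parser; a common workaround is to identify the array by UUID in mdadm.conf instead of by name. A sketch (the UUID shown is a placeholder):
Code:
# Read the array's UUID straight from the member superblocks
sudo mdadm --examine --scan
# Replace the broken ARRAY line in /etc/mdadm/mdadm.conf with a UUID-only one, e.g.:
# ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
sudo mdadm --assemble --scan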
View 1 Replies
View Related
Dec 21, 2010
I have been having some odd issues over the last day or so while trying to get a raid 5 array running in software under Kubuntu. I installed 3 1TB drives and started up; my sd* order got all messed up (sda was now sdc, and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to raid auto-detect and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed ok. I went to reboot once the copy was done and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that the mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.
By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.
Device List(original):
/dev/sda => boot drive, /home /
/dev/sdb => 1.5TB media storage, failing
[code]...
View 1 Replies
View Related
Jul 25, 2011
Long time lurker, still a Linux noob but I'm learning. I currently have a home media server setup with the following hardware specs:
MSI P45 motherboard
Intel Core2Quad Q6600
8GB DDR2 RAM
2x 250GB WD HDD in RAID1 via LVM (boot/swap etc)
8x2TB Hitachi HDD in RAID5 via mdadm (media/data)
The server mainly serves files for HTPCs around the house and runs a few VMs with VMware Server. I have recently picked up the following hardware which I'm thinking about upgrading to:
Gigabyte EX-58-Extreme motherboard
Intel i7 920
12GB DDR3 RAM
My main concern is: will I be able to just swap the drives into the new system and have everything pick up where it left off? More specifically, will mdadm be able to detect the 8x2TB drives attached to the new hardware and re-assemble the array?
My buddy that helped me set this system up isn't sure, so I figured I'd ask here first. The boards do have the same ICH10R southbridge providing 6 of the SATA ports, and 2 more will be run off of the extra onboard controller. I don't have a lot of Linux experience switching out core parts, but in Windows I've had great success moving things between various Intel chipsets and architectures from P965 -> P35 -> P45 -> H55 -> X58.
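For what it's worth, mdadm identifies members by the UUID stored in each drive's superblock, not by controller or port, so the drives ending up on different SATA ports shouldn't matter. A quick post-swap sanity check might look like this (sketch only):
Code:
# Confirm all eight members are visible and agree on the array UUID
sudo mdadm --examine --scan
# Assemble and verify
sudo mdadm --assemble --scan
cat /proc/mdstat
sudo mdadm --detail /dev/md0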
View 1 Replies
View Related
Nov 16, 2009
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1 - and I get: md1: raid array is not clean -- starting background reconstruction. Why is it not clean? Should I be worried? The HD is not new; it has been used before in a raid array, but has been repartitioned.
View 2 Replies
View Related
Jun 7, 2011
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04 on it, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
[code]...
View 11 Replies
View Related
Feb 3, 2011
When we assemble a raid array, where does it load the configuration information for that array from? I thought it refers to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.
# cat /etc/mdadm.conf
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/loop[012345]
[code]...
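The short answer is that the authoritative metadata lives in the superblock written on each member device; mdadm.conf mainly tells mdadm which devices to scan and which arrays to auto-start. Comparing the two makes this visible - a sketch:
Code:
# What the on-disk superblocks say (works even with a minimal mdadm.conf)
mdadm --examine --scan
# What one specific member records about its array (UUID, level, member count)
mdadm --examine /dev/sdb1
# Assembly uses that superblock data to find and order the members
mdadm --assemble --scan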
View 2 Replies
View Related
Feb 15, 2010
I have a problem with my mdadm RAID. I wanted to know if anyone has any experience with shrinking RAID5 arrays. I was growing the array from 5 to 6 devices; however, the grow got interrupted and it has recovered to 5 drives. The 6th drive is toast and I am unable to re-add it to the system. I would like to drop the device slot listed as "removed". I have tried "mdadm /dev/md0 --remove detached" and "mdadm /dev/md0 --remove failed" with no success. I am running Ubuntu kernel 2.6.28-11 and mdadm is v3.1.1.
Here is the output of "mdadm -D /dev/md0":
/dev/md0:
Version : 0.90
Creation Time : Wed Jan 12 00:46:41 2009
Raid Level : raid5
Array Size : 4883812480 (4657.57 GiB 5001.02 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Feb 15 20:25:07 2010
State : active, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 74fa5199:84b88e81:4ae0fbae:92643084
Events : 0.1331010
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 0 3 active sync /dev/sda
4 8 64 4 active sync /dev/sde
5 0 0 5 removed
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[4] sda[3] sdd[2] sdc[1]
4883812480 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
unused devices: <none>
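Since the data has already recovered onto the remaining 5 drives, the empty 'removed' slot can usually only be eliminated by reshaping the array back down to 5 raid devices, after first shrinking the filesystem and the array size to fit. A heavily hedged sketch - reshapes like this are slow and should only be attempted with a backup (the --array-size value is 4 x the Used Dev Size shown above):
Code:
# 1. Shrink the filesystem on /dev/md0 to below 4 x 976762496 KiB (e.g. with resize2fs)
# 2. Shrink the array's usable capacity to match
sudo mdadm --grow /dev/md0 --array-size=3907049984
# 3. Reshape down to 5 raid devices; a backup file covers the critical section
sudo mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape.bak
cat /proc/mdstat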
View 4 Replies
View Related
Jul 15, 2010
I've been having troubles with software raid. In particular, the raid array becomes un-assemblable after reboots. The config is CentOS 5, 4 SATA disks (one 160GB containing the OS, no raid, and 3 2TB disks configured as a RAID 5 array - no spare drive). These drives were configured in anaconda and all seemed to go well (the drive and its LVM partitions worked, and it finished rebuilding overnight). A couple of reboots later the drives cannot be assembled anymore and the machine won't boot. The error message says:
mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.
Of course there are 3 drives and no spares in the array as configured. Manually starting the array with mdadm --assemble --scan gives the same message, as does assembling the drive by specifying the individual parts. /proc/mdstat does recognize the 3 drives, and when I look at the partition tables in fdisk, they show as being software raid. What could be wrong, or what steps could I take to diagnose it? I tried configuring the raid drives manually before going the anaconda route. Also, does anyone know how I can edit the /etc/fstab file to disable them so the machine will at least boot? The (Repair filesystem) shell has the / drive mounted read-only.
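To at least get the machine booting, the repair shell's read-only root can be remounted writable and the array's fstab entries commented out; after that, the assemble can be retried with explicit members. A sketch - the fstab edits and member names are placeholders:
Code:
# In the (Repair filesystem) shell: make / writable, then disable the raid mounts
mount -o remount,rw /
vi /etc/fstab        # comment out the lines for the RAID/LVM filesystems
# After a normal boot, check what each member thinks it is, then try a forced assemble
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1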
View 7 Replies
View Related
Mar 3, 2010
I have a 4 drive RAID 5 array set up using mdadm. The system is stored on a separate physical disk outside of the array. When reading from the array it's fast, but when writing to the array it's extremely slow - down to 20MB/sec compared to 125MB/sec reading. It does a bit, then pauses, then writes a bit more and then pauses again, and so on. The test I did was to copy a 5GB file from the RAID to another spare non-raid disk on the system: average speed 126MB/s. Copying it back onto the RAID (in another folder), the speed was 20MB/s. The other thing is a very slow, several KB/s, write speed copying from an eSATA drive to the RAID.
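Bursty, slow raid5 writes are often a stripe-cache or alignment issue rather than a disk fault; raising md's stripe cache is a cheap experiment. A sketch, assuming the array is md0 (the value is just a starting point, and it costs memory):
Code:
# Default is 256 pages; larger values usually help buffered raid5 writes
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size
# Re-run the copy test; if it helps, make the setting persistent (e.g. via rc.local)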
View 9 Replies
View Related
Jul 18, 2011
Code:
$cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
[code]....
One of my 4 drives died after about 3 years of usage. I replaced it with an identical drive and did a mdadm --add to re-add it to the array. I expected this to take quite a long time, but not more than 1 million minutes to complete!
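An ETA that absurd usually means the resync is being throttled to a crawl (or another disk is throwing errors); the kernel's rebuild speed limits are worth checking and raising. A sketch:
Code:
# Current throttle values, in KiB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# Let the rebuild use more bandwidth
sudo sysctl -w dev.raid.speed_limit_min=50000
sudo sysctl -w dev.raid.speed_limit_max=200000
watch cat /proc/mdstat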
View 5 Replies
View Related
Sep 1, 2011
I've been using Ubuntu on my fileserver for quite a while now, and I've always really had this problem, but I want to finally address it and get it fixed. At seemingly random points (when my fileserver is under stress - typically while I'm writing lots of data to it), my fileserver will crash. It generally completely crashes, not responding to any further file requests or any of my SSH commands, and must be reset hard (typically by flipping the power switch). After such an occasion, I end up with some corrupted files. It seems to corrupt a large array of files (it's not an isolated issue - for example, it corrupts files that were not being accessed anywhere near the time it crashed, including files that had never been accessed during that period of uptime). The files don't get completely smashed, but they're definitely corrupted (artifacts in images, skips in audio and video files, often complete failure of binary files such as virtual hard drives or disc images).
I'm using Ubuntu Server 11.04, but similar issues to this happened for me in 10.04 LTS (in fact, I upgraded to try to solve them). I'm using mdadm to create an 8-drive raid6 array. The drives are 1.5 TB each, mostly Samsung HD154UI, but with a WD drive in there too (sorry, I can't find the model number at the moment). The hard drives themselves appear to be working fine - SMART reports no issues with any of them, mdadm says they're all up, and I have no reason to believe that the drives are at fault here (although I can conduct further tests if necessary). I've posted about this problem before here and here. In these cases, the issues seemed to be with XFS - in fact, I switched from XFS to ext4 on my RAID array because I simply believed XFS to be unstable. Unfortunately, this issue occurs with ext4 as well, so I'm fairly certain it's an mdadm issue. Here is the output of "cat /proc/mdstat", for those interested:
[Code]....
View 9 Replies
View Related
Jan 31, 2010
I am using mdadm 2.6.4 for managing RAIDs on Linux kernel 2.6.18. I have a query: whenever I try to add a new disk to a running linear array (JBOD), I get the message "cannot add new disk to this array".
The exact steps are as follows:
create a new array as:
mdadm -C /dev/md0 -llinear -n2 /dev/sata/ /dev/sata2
The array is created and I am able to see it with the -D command.
Now I add a new disk, sata3, as follows:
mdadm --grow /dev/md0 --add /dev/sata3
I get the output:
md: sdb has invalid sb, not importing!
md: md_import_device returned -22
mdadm: cannot add new disk to this array.
So my first doubt is whether mdadm 2.6.4 supports this feature or not; if it does, do I need to change the driver?
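The "invalid sb, not importing" message points at stale or foreign metadata on the new disk rather than a missing mdadm feature (growing a linear array with --add has been supported for a long time). Clearing the old superblock before the grow is the usual fix; a sketch reusing the poster's device names:
Code:
# Wipe any old RAID superblock left on the new disk
mdadm --zero-superblock /dev/sata3
# Then extend the linear array
mdadm --grow /dev/md0 --add /dev/sata3
mdadm -D /dev/md0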
View 3 Replies
View Related
Aug 7, 2011
I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.
OpenSuse 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x IDE 500GB disks. I created /dev/md0 as a 250MB partition across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB partition across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that had drives failing to show up at boot. Somehow, /dev/sdd2 got attached to array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.
The obvious step of failing the partition, removing it, then adding (or re-adding) will bring it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was to 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. Thought this might be the trick - after the zero, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.
So 'mdadm --add /dev/md1 /dev/sdd2' and it still comes back as a spare. Here is mdadm -D /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Sat Jul 9 10:26:01 2011
Raid Level : raid1
Array Size : 488119160 (465.51 GiB 499.83 GB)
code....
I can't stop this array, the OS is running from there. I can't easily boot from CD to repair, all IDE ports have disks attached.
Does anyone have an incantation to promote a spare to active?
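One thing worth checking before more add/remove cycles: if the array now believes it only needs one active device, or the partition keeps landing in a spare slot, mdadm -D will show it in the Raid Devices and State lines, and the remedy differs accordingly. A hedged sketch of the checks and the most common fixes - not something to run blindly against a mounted root:
Code:
# How many active slots does md1 think it has, and what role does sdd2 currently hold?
mdadm -D /dev/md1
mdadm -E /dev/sdd2
# If Raid Devices has dropped to 1, grow it back to 2 so the 'spare' can sync in
mdadm --grow /dev/md1 --raid-devices=2
# If a rebuild is simply pending or delayed, it shows up here
cat /proc/mdstat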
View 2 Replies
View Related
Mar 12, 2010
I've recently started having an issue with an mdadm RAID 6 array that been operational for about 2500 hours.
Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference and it seems completely random. Sometimes copying a 5GB dataset results in no slowdown; other times a torrent downloading to the array at 50kB/sec does cause a slowdown, and vice versa.
The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.
Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100MB/sec for each drive, ~425MB/sec for the array).
The only thing I did notice is that udma6 is enabled for all the ICH9R drives, while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drive to udma6 results in an IO error from hdparm.
The si3132 drive is using the sata_sil24 driver. Nothing of interest appears in the kern or syslog. During this time top shows very high wait time.
The si3132 controller appears to have the original firmware from 2006 loaded; there are some firmware updates available on the Silicon Image website for this controller that now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.
Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so which firmware is best supported by the linux driver?
I know I'm not using its raid features, but I've dealt with controllers that needed to be in raid mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.
View 2 Replies
View Related
May 24, 2011
I'm looking to shrink my Windows partition on a raid0 array and create an mdadm Ubuntu partition using raid0. Is this possible? Can I just ignore the /dev/mapper device and use the standard /dev/sdx devices?
View 4 Replies
View Related