Ubuntu Servers :: Mdadm RAID 6 Array With Si3132 SATA Controller?

Mar 12, 2010

I've recently started having an issue with an mdadm RAID 6 array that has been operational for about 2500 hours.

Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the two drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference, and it seems completely random: sometimes copying a 5 GB dataset results in no slowdown, while other times a torrent downloading to the array at 50 KB/sec does cause a slowdown, and vice versa.

The array consists of 8 WD 1.5TB drives: 6 attached to the ICH9R south bridge and 2 attached to an si3132-based PCI Express card. The array is formatted as a single ext4 partition.

Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (~100 MB/sec for each drive, ~425 MB/sec for the array).

The only thing I did notice is that udma6 is enabled for all the ICH9R drives, while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.
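
In case it helps anyone reproduce this, here is roughly how the modes can be checked and set with hdparm (a sketch; /dev/sdg is a placeholder for one of the si3132 drives):

Code:
sudo hdparm -I /dev/sdg | grep -i udma   # list supported modes; the active one is starred
sudo hdparm -X udma6 /dev/sdg            # attempt to force UDMA6 (returns the I/O error here)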

The si3132 drives are using the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During these stalls, top shows very high I/O wait time.

The si3132 controller appears to have the original firmware from 2006 loaded. There are firmware updates available on the Silicon Image website for this controller, which now offer a separate firmware for RAID operation (some sort of hybrid controller/software RAID the chip supports) and a separate firmware for standard IDE use.

Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so which firmware is best supported by the linux driver?

I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.

View 2 Replies



Ubuntu Servers :: Mdadm - Why /dev/sdb1 And /dev/sdi1 Show As Both Ext2fs And Also As Part Of A RAID Array

May 31, 2011

I've been having some problems with my RAID 5 array, and after extensive investigation, I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, because it's a previously created array, but it didn't seem to like that. So, I checked into --create, and it will re-create the array without destroying the data, if the superblocks are persistent, which they seem to be. However, here's what I get:

[Code]....

My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?
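
One way to inspect both signatures side by side (a hedged sketch using the device names from the question): with version 0.90 metadata the md superblock sits at the end of the partition, so a stale ext2 signature at the start of the partition can survive and make tools report both.

Code:
sudo file -s /dev/sdb1          # reports the filesystem signature at the start of the partition
sudo mdadm --examine /dev/sdb1  # reports the md superblock (0.90 metadata sits at the end)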

View 3 Replies View Related

Ubuntu :: Random SATA Drive Is Always Busy / In Use (RAID 5 / Mdadm)

Dec 3, 2010

I have 4 SATA drives in a RAID 5 array using mdadm. Yesterday when I started the computer the RAID did not build/mount. When trying to load the array manually I get the message "mdadm: cannot open device /dev/sd(a,b,c,d)1: Device or resource busy". The drives should not be mounted or in use. The output of mdadm for the drives (mdadm --examine /dev/sd_1) looks normal.

The weirdest part is that rebooting often changes which drive is marked as busy; it can be any of the 4 SATA drives. How can I figure out what is holding the drives and how to release them? I have tried searching for similar threads here and in Google and haven't found anything similar or that worked.
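
One way to chase down what is holding the disks (a sketch; md_d0 is a hypothetical name for a half-assembled device, and the sd* names are placeholders):

Code:
cat /proc/mdstat                   # look for a partially assembled md device holding a member
sudo mdadm --stop /dev/md_d0       # stop it so it releases the disk
sudo dmsetup ls                    # check whether device-mapper/dmraid has claimed the drives
sudo mdadm --assemble /dev/md0 /dev/sd[abcd]1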

View 3 Replies View Related


General :: Bad Sectors On Mdadm Raid 5 Array?

Aug 14, 2010

I'm running a Debian homeserver, with a 3-disk (1GB each) RAID 5 array using mdadm (the OS is on a separate disk). Now, smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (except for a backup of valuable data). I found some articles on how to fix these sectors, but I'm unaware what the result on the whole array will be.
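
A hedged sketch of the usual checks, assuming the array is /dev/md0 and the suspect disk is /dev/sdb:

Code:
sudo smartctl -t long /dev/sdb                         # run a long self-test on the suspect disk
sudo smartctl -a /dev/sdb                              # review reallocated/pending sector counts
echo repair | sudo tee /sys/block/md0/md/sync_action   # md scrubs the array, rewriting bad sectors from parity
cat /proc/mdstat                                       # watch the scrub progress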

View 4 Replies View Related

Software :: Creating New Mdadm Raid 1 Array?

Mar 2, 2011

I have a server that was running a hardware isw RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So, I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive, and I was able to set it to rebuild the RAID. Continuing the reboot, everything came up just fine except the RAID 1 on the system disk. I have tried many times to get the system to rebuild the RAID using dmraid, but to no avail: it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed.

At present the system does not show up with a RAID setup on the system disk (this comprises the entire 1TB disk, with two partitions: sda1 as / and sda2 as swap).

Problem: I have decided to forego the Intel RAID and just use mdadm. I have a test system set up to duplicate the server setup (not the software, but the disk partitions); a rough sketch of what I have in mind follows the listing below.

Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
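
A rough sketch of the mdadm migration (assumes the working disk is sda and the new disk is sdb; these names and the two-array split are illustrative, not tested on the live server):

Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb                                   # clone the partition table
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # degraded mirror for /
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2   # degraded mirror for swap
# ...copy the data over, update fstab and the bootloader, boot from the md devices, then:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2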

View 12 Replies View Related

Software :: RAID Mdadm Cant Add Disks To Array?

Sep 10, 2010

I have a 7-drive RAID array on my computer. Recently, my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". These both return errors and will not complete on the new RAID card.

Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.

[code]....
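
If the superblocks still agree, a forced assembly with an explicit member list is the usual next step (a sketch; /dev/sd[b-h]1 is a placeholder range for the 7 members):

Code:
sudo mdadm --examine /dev/sd[b-h]1 | grep -iE 'event|state'   # compare event counters first
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sd[b-h]1          # --force rolls slightly stale members back in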

View 4 Replies View Related

CentOS 5 :: Installing The SATA RAID Controller?

Mar 19, 2009

I'm working on a new server and it has an Nvidia SATA array controller with 2 250GB SATA drives configured in a hardware array. When the first screen comes up I enter the option "linux dd" for it to prompt me for the drivers, but nothing ever happens. The screen says that it's loading a SATA driver for about 15 minutes, and then the screen clears and shows a plus-sign cursor on a black screen. What am I doing wrong? The only drivers that came with the HP server are for RedHat 4 and 5 and SUSE; will any of those actually work?

View 7 Replies View Related

Debian Hardware :: Sata Errors While Building Mdadm Raid 5?

Oct 13, 2010

I received some errors while running a benchmarking script to determine my ideal RAID chunk size. There are several errors in the kernel log regarding the SATA link, and eventually the two drives I have connected to a PCI Express x1 SATA card were no longer present in /dev/.

The script I was using is available here: [URL]..

System specs:
1x 500GB Western Digital drive (system drive)
3x 2TB Samsung F4 drives (2 connected to the PCI Express x1 card (SATA II), 1 on an onboard SATA port (SATA I))
single-core AMD64 CPU on an SiS chipset
Debian 64-bit (testing)

[Code]...

I rebooted the machine and everything appears to be happy. What do these errors mean? What steps should I take to prevent them in the future, so it doesn't end up corrupting the array?
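
A hedged sketch of what to look at (sdb is a placeholder for one of the drives on the PCIe card):

Code:
dmesg | grep -iE 'ata|reset'                          # look for "failed command" / "hard resetting link"
cat /sys/block/sdb/device/queue_depth                 # current NCQ depth
echo 1 | sudo tee /sys/block/sdb/device/queue_depth   # flaky card/drive combos often behave better with NCQ effectively off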

View 1 Replies View Related

Ubuntu :: Slow Write Speeds On Mdadm RAID 5 Array

Mar 3, 2010

I have a 4-drive RAID 5 array set up using mdadm. The system is stored on a separate physical disk outside of the array. When reading from the array it's fast, but when writing to the array it's extremely slow: down to 20MB/sec, compared to 125MB/sec reading. It writes a bit, then pauses, then writes a bit more and pauses again, and so on. The test I did was to copy a 5GB file from the RAID to another spare non-RAID disk on the system: average speed 126MB/s. Copying it back onto the RAID (into another folder), the speed was 20MB/s. The other thing is the very slow (several KB/s) write speed when copying from an eSATA drive to the RAID.
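
One knob that often matters for bursty RAID 5 writes is the stripe cache, which defaults to a small value (a sketch, assuming the array is /dev/md0; each unit costs PAGE_SIZE times the number of disks in RAM):

Code:
cat /sys/block/md0/md/stripe_cache_size                   # default is 256
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size  # larger cache lets md batch full-stripe writes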

View 9 Replies View Related

Server :: Mdadm Acting Oddly With RAID 5 Array?

Dec 21, 2010

I have been having some odd issues over the last day or so while trying to get a RAID 5 array running in software under Kubuntu. I installed 3 1TB drives and started up; my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to RAID auto-detect, and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that the mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.

By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.

Device List(original):
/dev/sda => boot drive, /home /
/dev/sdb => 1.5TB media storage, failing

[code]...
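
Since the device names keep moving around, one approach is to key mdadm.conf on the array UUID rather than on sd* names (a sketch; the config path varies by distribution, Kubuntu typically uses /etc/mdadm/mdadm.conf):

Code:
mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # emits ARRAY ... UUID=... lines, immune to reordering
sudo update-initramfs -u                                    # so boot-time assembly sees the new config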

View 1 Replies View Related

Server :: Mdadm Create - Raid Array Is Not Clean?

Nov 16, 2009

I run:

Code:
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1

and I get "md1: raid array is not clean -- starting background reconstruction". Why is it not clean? Should I be worried? The HD is not new; it has been used before in a RAID array, but has been repartitioned.

View 2 Replies View Related

Server :: MDADM Raid 5 Array - OS Drive Failure?

Jun 7, 2011

I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it, installed a fresh copy of Ubuntu 11.04 on it, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error

mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array." I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:

root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90

[code]...
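
A hedged sketch of the usual recovery attempt (assumes the four members are /dev/sd[abcd]1; compare the event counters before forcing anything):

Code:
sudo mdadm --examine /dev/sd[abcd]1 | grep -iE 'event|state'   # members that dropped out earlier lag behind
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sd[abcd]1          # --force accepts slightly stale members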

View 11 Replies View Related

Server :: Raid Array Metadata Info (mdadm)?

Feb 3, 2011

When we assemble a RAID array, from where does it load the configuration information for that array? I thought it referred to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.

# cat /etc/mdadm.conf
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/loop[012345]

[code]...

View 2 Replies View Related

CentOS 5 :: Software RAID - Starting Array With Mdadm

Jul 15, 2010

I've been having troubles with software RAID. In particular, the RAID array becomes un-assemblable after reboots. The config is CentOS 5 with 4 SATA disks: one 160GB disk containing the OS (no RAID) and three 2TB disks configured as a RAID 5 array with no spare drive. These drives were configured in anaconda and all seemed to go well (the array and its LVM partitions worked and it finished rebuilding overnight). A couple of reboots later the drives cannot be assembled anymore and the machine won't boot. The error message says:

mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.

Of course there are 3 drives and no spares in the array as configured. Manually starting the array with mdadm --assemble --scan gives the same message, as does assembling the drive by specifying the individual parts. /proc/mdstat does recognize the 3 drives, and when I look at the partition tables in fdisk, they show as being software RAID. What could be wrong, and what are the steps to diagnose it? I tried configuring the RAID drives manually before going the anaconda route. Also, does anyone know how I can edit the /etc/fstab file to disable them so the machine will at least boot? The (Repair filesystem) shell has the / drive mounted read-only.
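
To get the machine booting again, the array's fstab entry can be disabled from the repair shell (a sketch; the fstab line shown is an example, not the actual entry):

Code:
mount -o remount,rw /   # the repair shell mounts / read-only
vi /etc/fstab           # comment out the array's line, or add "nofail" so boot continues without it:
# /dev/md0  /data  ext3  defaults,nofail  1 2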

View 7 Replies View Related

Hardware :: Stop Or Work Around Onboard SATA Raid Controller

Jun 12, 2010

I bought a used server and it's in great working condition, but I've got a 2-part problem with the onboard RAID controller. I don't have or use RAID and want to figure out how to stop or work around the onboard SATA RAID controller. First, some motherboard specs: Arima HDAMA 40-CMO120-A800 [URL]... The integrated 4-port Silicon Image Sil3114 SATA RAID controller is the problem.

Problem 1: When I plug in my SATA server hard drive loaded with Slackware 12.2 and Linux kernel 2.6.30.4, the onboard RAID controller recognizes the one drive and allows the OS to boot. Slackware gets stuck looking for a RAID array and stops at this point -........

View 11 Replies View Related

Server :: Creating Backup Disk Image Of RAID 1 Array (MDADM)?

Oct 27, 2010

We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc); however, we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):

Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img

then reverse the process on the new PC, finally using mdadm --assemble to re-create and re-build the array.
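
The reverse half of the process, as a hedged sketch (paths and devices taken from the example above; the target disk on the new box is assumed to be /dev/sda):

Code:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096   # lay the image back down
mdadm --assemble --scan                                           # then re-assemble the mirror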

View 1 Replies View Related

Server :: Raid 1 - Resync The Drives In The Array Hda Primary And Hdc Secondary Using Mdadm?

Nov 30, 2010

I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check there were no problems, and it booted. I have installed the old drive back in the system as hdc and need to resync the drives (using the old drive, as its partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary and hdc secondary) using mdadm?
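
raidhotadd's job is handled by mdadm --add these days; a minimal sketch, assuming the mirror is /dev/md0 and the returning partition is /dev/hdc1:

Code:
mdadm /dev/md0 --add /dev/hdc1   # hot-add; the resync starts automatically
watch cat /proc/mdstat           # monitor the rebuild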

View 1 Replies View Related

Server :: Raid 10 Not Assembling - Mdadm Assembled From 2 Drives - Not Enough To Start The Array?

Feb 20, 2011

This is the message I get when I try to start it: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array". Below is the information I've collected. Any help on how I can get the RAID back up and going, so I can get the data off of it, would be awesome.

sudo mdadm --examine --scan -v
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=91c36708:a7cbb532:5b51dc92:ba008491
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1,/dev/sda1

[code]...
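
For a 4-disk RAID 10, two drives are only enough if they come from different mirror pairs, so it's worth comparing the superblocks before forcing assembly (a sketch using the device names from the scan above):

Code:
sudo mdadm --examine /dev/sd[abcd]1 | grep -iE 'event|state'   # see which members are current
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1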

View 1 Replies View Related

Software :: RAID 5 Array Not Assembling All 3 Devices On Boot Using MDADM - One Is Degraded

Aug 31, 2010

I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced in, to make a RAID 5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:

[Code]....
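
Boot-time assembly coming up short is often a config/initramfs issue, and a write-intent bitmap makes the eventual re-adds near-instant instead of hours. A hedged sketch, assuming a Debian/Ubuntu-style layout:

Code:
mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record the array by UUID
sudo update-initramfs -u                                    # rebuild the boot image so assembly sees it
sudo mdadm --grow --bitmap=internal /dev/md0                # future re-adds only sync changed blocks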

View 11 Replies View Related

OpenSUSE Install :: Error 11.2 On Raid 0 / EM2001 2-Port PCI SATA Controller Fails?

Mar 1, 2010

I want to stop using Windows because it sucks, so I have downloaded all kinds of distributions of Linux. They all give the same error, because it seems Linux has problems with fakeraid. Now I have OpenSuse running in VMware 7.0.1, but I want it as the only OS.

The installation goes fine, but at the end it gives a Grub error because it cannot create the bootloader. It seems to be a common problem, and I have done all the steps that I could find on Google.

I have two RAID controllers. One is integrated on the Asrock ALiveNF7G-HD720p R5.0 mainboard, and OpenSuse sees it as a JMicron controller. I also bought an EM2001 2-port PCI SATA controller card with two hard disks in RAID 0, because Linux failed to install on the JMicron. On the EM2001 it also fails with the same error.

I want OpenSuse 11.2 working on RAID 0. I know for experienced Linux users it must be some simple commands in a terminal from a live CD to correct the bootloader manually, but I'm a Windows user.

Can someone please tell me the exact steps and commands to install Linux on RAID 0 fakeraid?

View 4 Replies View Related

Hardware :: Give Centos Drivers For Embedded SATA Raid Controller On HP Proliant?

Feb 5, 2010

I forced my workplace to forgo Windows and opt for Linux for the web and mail server. I'm setting up CentOS 5.4 on it and I ran into a problem. The server machine is an HP ProLiant DL120 G5 (quad-core processor, 4GB RAM, two 150GB SATA drives attached to the onboard hardware RAID controller). RAID is enabled in the BIOS. I pop in the CentOS disk and go through the installation process.

When I get to the stage where I partition my hard drive, it is showing one hard drive, not as the traditional sda, but as mapper/ddf1_4035305a86a354a45. I looked around and figured that I need to give CentOS the RAID drivers. I downloaded them from:

[URL]

I follow the instructions, download the aarahci-1.4.17015-1.rhel5.i686.dd.gz file, and unzip it using gunzip. Then on another nix system, I do this:

Code:
dd if=aarahci-1.4.17015-1.rhel5.i686.dd of=/dev/sdb bs=1440k

Note that I am using a USB floppy drive, hence the sdb. After that, during CentOS setup, I type: linux updates dd

It asks me where the driver is located. I tell it, and the installation continues in graphical mode. But I still get mapper/ddf1_4035305a86.a354a45 as my drive. I tried to continue to install CentOS on it. It was successful, but when I do a "df -h" it gives me /dev/mapper/ddf1_4035305a86......a354a45p1 as /boot

/dev/mapper/ddf1_4035305a86......a354a45p2 as /
/dev/mapper/ddf1_4035305a86......a354a45p3 as /var
/dev/mapper/ddf1_4035305a86......a354a45p4 as /external
/dev/mapper/ddf1_4035305a86......a354a45p5 as /swap
/dev/mapper/ddf1_4035305a86......a354a45p6 as /home

Well, I know why it's giving these, because I set it up that way, but I was hoping it would somehow change to the normal /dev/sda, /dev/sdb. That means that the driver I provided did not work. I have another IBM server (5U) with a RAID SCSI drive, and it shows the usual /dev/sda. It also has hardware RAID. So I know that there is something wrong with the /dev/mapper/ddf1_4035305a86......a354a45p1 format.

First, is there any way that I can put the aarahci-1.4.17015-1.rhel5.i686.dd floppy image on a CD? I really need to set this up with RAID. I know I could simply disable RAID in the BIOS, and then I would get two normal hard drives, sda and sdb, but it has to be a RAID setup. Is there any way to slipstream the driver into the CentOS DVD? At the HP link I provided above, under installation instructions, there are some instructions titled "Important", but I couldn't get them to work.

View 2 Replies View Related

Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750GB) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue, as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250+ 750 + 250 +200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
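
If Disk Utility is building a striped (RAID-0-style) set, capacity is bounded by the smallest member times the number of members: 11 x 160GB is roughly 1.8TB, which matches the observed size (and 3 x 160GB = 480GB matches the earlier drop). A linear/spanned volume uses every disk in full; a hedged LVM sketch (device names are placeholders):

Code:
sudo pvcreate /dev/sd[b-l]                      # initialize each backup disk
sudo vgcreate backupvg /dev/sd[b-l]             # pool them into one volume group
sudo lvcreate -l 100%FREE -n backuplv backupvg  # one linear LV spanning all of them
sudo mkfs.ext4 /dev/backupvg/backuplv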

View 4 Replies View Related

Server :: RAID With RHEL5-AS 64bit On HP DL320 G6 (Smart Array B110i SATA)

Aug 4, 2010

Trying to install RHEL5-AS 64-bit onto an HP DL320 G6, with RAID being a mirrored array on the Smart Array B110i SATA controller. The RAID is configured in the BIOS and seems fine.

When I install RedHat, I have to use 'linux dd' to load the HP-provided driver (http://h20000.www2.hp.com/bizsupport...5&mode=4&idx=0), and that works fine during the installer; the GUI during the install sees the RAID just fine, seeing one volume, calling it the HP Volume. However, when the system boots after the install, the RAID is gone, and it's now seeing two drives, /dev/sda and /dev/sdb:

[Code]...

View 2 Replies View Related

Ubuntu Servers :: Mdadm Inconsistent Status On Disks In Same Array

Jan 21, 2011

When I start my RAID 5, only 2 disks of 3 are active on md0. The 3rd disk is inactive on md_d0. When I do mdadm --examine, the two active disks report 2 active, 2 working, 1 failed. The inactive disk reports 3 active, 3 working, 0 failed.

View 2 Replies View Related

Ubuntu Servers :: 15K/sec Rebuild Speed On Mdadm RAID6 Array?

Jul 18, 2011

Code:

$cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

[code]....

One of my 4 drives died after about 3 years of usage. I replaced it with an identical drive and did an mdadm --add to re-add it to the array. I expected this to take quite a long time, but not more than 1 million minutes to complete!
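
md throttles its own rebuilds; when a rebuild crawls like this, it's worth checking the kernel's speed limits and whether competing I/O is pinning the array at the floor (a sketch; values are in KB/s):

Code:
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # defaults are 1000 and 200000
sudo sysctl -w dev.raid.speed_limit_min=50000              # force a higher floor during the rebuild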

View 5 Replies View Related

Ubuntu Servers :: Mdadm - Corrupt A Large Array Of Files

Sep 1, 2011

I've been using Ubuntu on my fileserver for quite a while now, and I've always really had this problem, but I want to finally address it and get it fixed. At seemingly random points (when my fileserver is under stress - typically while I'm writing lots of data to it), my fileserver will crash. It generally completely crashes, not responding to any further file requests or any of my SSH commands, and must be reset hard (typically by flipping the power switch). After such an occasion, I end up with some corrupted files. It seems to corrupt a large array of files (it's not an isolated issue - for example, it corrupts files that were not being accessed anywhere near the time it crashed, including files that had never been accessed during that period of uptime). The files don't get completely smashed, but they're definitely corrupted (artifacts in images, skips in audio and video files, often complete failure of binary files such as virtual hard drives or disc images).

I'm using Ubuntu Server 11.04, but similar issues to this happened for me in 10.04 LTS (in fact, I upgraded to try to solve them). I'm using mdadm to create an 8-drive raid6 array. The drives are 1.5 TB each, mostly Samsung HD154UI, but with a WD drive in there too (sorry, I can't find the model number at the moment). The hard drives themselves appear to be working fine - SMART reports no issues with any of them, mdadm says they're all up, and I have no reason to believe that the drives are at fault here (although I can conduct further tests if necessary). I've posted about this problem before here and here. In these cases, the issues seemed to be with XFS - in fact, I switched from XFS to ext4 on my RAID array because I simply believed XFS to be unstable. Unfortunately, this issue occurs with ext4 as well, so I'm fairly certain it's an mdadm issue. Here is the output of "cat /proc/mdstat", for those interested:

[Code]....
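
One hedged way to test whether the array itself is damaging data, as opposed to damage happening at crash time: scrub it while the machine is otherwise idle and watch the mismatch count (assumes the array is md0):

Code:
echo check | sudo tee /sys/block/md0/md/sync_action   # read-only parity scrub
cat /proc/mdstat                                      # progress
cat /sys/block/md0/md/mismatch_cnt                    # non-zero after the scrub indicates inconsistency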

View 9 Replies View Related

Software :: Rebuild/repair Array (Raid 1) With Only "mdadm" Command Slack12.2?

Mar 18, 2010

I wonder how to attach a new SATA hard disk to a software array where there are two disks and one has crashed (this is mirroring mode, RAID 1). The situation is like this: I unplugged the crashed disk and bought a similar one to plug in. What should I do next?
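
A minimal sketch of the usual replacement procedure, assuming the surviving disk is /dev/sda, the new disk is /dev/sdb, and the mirror is /dev/md0:

Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the survivor's partition layout to the new disk
mdadm /dev/md0 --add /dev/sdb1         # hot-add; the mirror rebuilds automatically
cat /proc/mdstat                       # watch the resync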

View 4 Replies View Related

Ubuntu Servers :: Use UUIDs To Setup A Raid With Mdadm?

Oct 6, 2010

Can I use UUIDs to setup a raid with mdadm?
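 
mdadm can match arrays by UUID both on the command line and in mdadm.conf, which avoids depending on sd* device names (a sketch; the UUID shown is a made-up example, and the config path varies by distribution):

Code:
sudo mdadm --assemble --scan --uuid=91c36708:a7cbb532:5b51dc92:ba008491
# or persistently, in /etc/mdadm/mdadm.conf:
# ARRAY /dev/md0 UUID=91c36708:a7cbb532:5b51dc92:ba008491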

View 3 Replies View Related

Ubuntu Servers :: Create A New Mdadm RAID 5 Device /dev/md0 Across Three Disks?

Feb 5, 2011

I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in my array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sorta-thing. I have 3 HDDs (2TB each) and was hoping to use most of the available disk space as a RAID 5 mdadm device (which gives me a bit less than 4TB).

I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.

Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; the test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and rebooted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:

Code:
user@guybrush:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc5[3] sdb5[1] sda5[0]
3779096448 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

[Code]...
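
When re-creating over a previously existing array like this, stale superblocks on the members are a common cause of a phantom "faulty spare". A hedged sketch (this destroys the md metadata on those partitions, so only after backing anything up):

Code:
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda5 /dev/sdb5 /dev/sdc5   # wipe the old superblocks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5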

View 9 Replies View Related
