General :: Raid - Recover Software RAID5 Array After Server Upgrade?
Jun 7, 2011
I recently upgraded a server from Fedora 6 to Fedora 14. In addition to the main hard drive where the OS is installed, I have three 1TB hard drives configured for RAID5 (via software). After the upgrade, I noticed one of the hard drives had been removed from the raid array. I tried to add it back with mdadm --add, but it just put it in as a spare. I figured I'd get back to it later. Then, when performing a reboot, the system could not mount the raid array at all. I removed it from the fstab so I could boot the system, and now I'm trying to get the raid array back up.
I ran the following:

mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1

I know my chunk size is 64k, and "missing" is for the drive that got kicked out of the array (/dev/sdb1). That seemed to work, and mdadm reports that the array is running "clean, degraded" with the missing drive. However, I can't mount the raid array. When I try:

mount -t ext3 /dev/md0 /mnt/foo

I get:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
[code]....
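One possible culprit here is the superblock format: the mdadm shipped with Fedora 6 wrote 0.90 metadata, while the one on Fedora 14 defaults to a newer version that places the data at an offset, so a recreated array no longer lines up with the old ext3 filesystem. A sketch of retrying the recreate with the old metadata version (the 0.90 assumption is a guess, not a fact from the post):

Code:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=5 --chunk=64 \
    --raid-devices=3 missing /dev/sdc1 /dev/sdd1
fsck.ext3 -n /dev/md0     # read-only check; only attempt a real mount if this looks sane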
View 1 Replies
Jul 5, 2011
I have an Acer Altos EasyStore SATA NAS box that hung; the only way to reboot was to crash the system (unplug it). Upon reboot it was not recognising the hard drives (it wanted to do a destructive reinitialize). Most of the important data was backed up, however some was overlooked and we'd quite like to get it back.

Removing the disks and placing them in a PC with enough SATA bays to cope, and booting with a live Linux distribution (System Rescue CD), I can see the 4 drives are not suffering hardware errors and that the original partitions exist. Using mdadm I can assemble the arrays without error (there seem to be three, but the only one I am concerned with is the RAID5 array of about 3TB). /dev/m1p2 mounts as a loop device once an offset is entered. In turn this mounts as an XFS partition. However, despite df showing the partition to be almost full, ls -l or ls -a on the mount point shows it to be empty!
I got this far using a translation from a German language forum; unfortunately I only speak a little German, and the only other English language post on a similar matter I found within that site had no replies. The next step was to unmount the loop device, then run xfs_check and xfs_repair on the file system. xfs_check returns that there are a few dir size and offset errors along with link count mismatches. This I would presume is normal for a file system that has become slightly corrupted. xfs_repair (version 3.0.3) gets as far as Phase 3; it finds and corrects zero-length entries, offsets on directories and bogus inode numbers. However the final two lines are:
Code:
realloc failed in blkent_append (2671166480 bytes)
zsh: segmentation fault xfs_repair /dev/loop1
Searching on the error (leaving out the data size) just returns source code that generates it; is anybody able to explain what it means? Also, after remounting the hard drive, ls and variants of it still do not return anything. Am I missing something (the root I am logged in with now would presumably have different credentials to root on the NAS box, so how do I get around this)?
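The realloc failure with a roughly 2.7GB request usually just means xfs_repair ran out of memory (or address space) rather than indicating anything about the filesystem itself. A sketch of the common workarounds, assuming the loop device name from the post:

Code:
# add temporary swap so the repair has room to work
dd if=/dev/zero of=/tmp/extraswap bs=1M count=4096
mkswap /tmp/extraswap && swapon /tmp/extraswap
# -P disables prefetching, which lowers xfs_repair's memory use
xfs_repair -P /dev/loop1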
View 5 Replies
View Related
Jan 27, 2010
I had a raid 5 + lvm 2 array and lost a disk. While it was recovering the array, the power went down and the recovery stopped. When I restored the power and started the machine, the array was unable to start; it was degraded and the states differed between disks. Each disk sees the array in a different way. Here are the states:
Quote:
/dev/sdd1
Number Major Minor RaidDevice State
this 0 8 33 0 active sync /dev/sdc1
[code]....
The first part in /dev/sdc1 is the same for all the devices; I'm just posting the states. Another thing is that all the devices say there is no superblock. It seems that 3 disks are "active sync" but the states of the others don't match between them. And /dev/sdd1 is a spare, the disk I added manually at first to start the recovery process.
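When members disagree like this, the usual first step is to compare the event counters and update times in each superblock and then force-assemble with the freshest members, leaving the spare out. A sketch with placeholder device names (only /dev/sdd1, the spare, is known from the post):

Code:
mdadm --examine /dev/sd[a-f]1 | grep -E 'Events|Update Time|State'
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1   # leave the spare (/dev/sdd1) out
vgchange -ay                                                      # then reactivate the LVM volume group on top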
View 3 Replies
View Related
Dec 23, 2010
I have a RAID 5 array, md0, with three full-disk (non-partitioned) members: sdb, sdc, and sdd. My computer will hang during the AHCI BIOS if AHCI is enabled instead of IDE while these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).
Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
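The usual approach is to do it one member at a time: fail a disk, partition it, add the partition back, and wait for the resync before touching the next one. Note that a partition is slightly smaller than the raw disk, so the array's component size (and the filesystem) may need shrinking first. A sketch, using the device names from the post:

Code:
# shrink the filesystem and array first if needed: resize2fs, then mdadm --grow --size=...
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
parted -s /dev/sdd mklabel msdos
parted -s /dev/sdd mkpart primary 1MiB 100%
parted -s /dev/sdd set 1 raid on          # marks the partition type as RAID (0xFD)
mdadm /dev/md0 --add /dev/sdd1            # wait for the resync, then repeat for sdc and sdb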
View 1 Replies
View Related
Jun 7, 2010
I'm a bit sick to my stomach right now. I had a raid5 array (5x1.5TB drives) and I upgraded to lucid and now the array no longer works. Initially, on boot, it would try to mount it from fstab and that failed consistently as it wasn't assembling it.
Then I tried to assemble it by hand (--scan) and that seemed to cause it to mount degraded (it seems md in the process tried to use one of the disks for something else!), but when I look at its partition table, it's empty. I'm pretty pissed at the moment (somewhat at myself; I didn't really need to upgrade). Any ideas what went wrong?
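Before anything destructive, it may be worth checking which devices still carry md metadata and whether the members were whole disks or partitions (an empty partition table on a whole-disk member is normal). A sketch with placeholder device names:

Code:
cat /proc/mdstat
mdadm --examine /dev/sd[b-f]      # whole disks
mdadm --examine /dev/sd[b-f]1     # or their first partitions, wherever the superblocks actually live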
View 2 Replies
View Related
May 20, 2011
My box has a raid5 array (mdadm) with everything in it (/boot and /) but swap, spread across the 4 drives. I had Ubuntu 10.10 (amd64) installed with grub1; when I upgraded to Natty (11.04) it automatically installed grub2. Well, boot fails: it always goes to grub rescue no matter what happens. I've installed and reinstalled grub2, and boot always fails with:
"error: file not found".
In grub rescue I can see that md0 is actually available; an "ls" of (md0)/boot succeeds, but the strange thing is that an "ls" of (md0)/boot/grub prints nonsense, as does an "ls" of (md0)/boot/usr/lib/grub/i386-pc/. When I try to load the required modules for boot (linux, raid, etc.) in grub I also always get a "file not found" error (I fsck'd md0, which says everything's fine). I have installed the latest version of grub2 and executed grub-install on all four drives.
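A common repair route is to boot a live CD, assemble the array, and reinstall grub2 from a chroot so its modules and core image are regenerated on every member. A sketch, assuming the array assembles as /dev/md0 and the members are sda through sdd:

Code:
mdadm --assemble --scan
mount /dev/md0 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt update-grub
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do chroot /mnt grub-install $disk; done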
View 6 Replies
View Related
Oct 29, 2010
I've had software RAID 5 arrays for a while now, so they were set up before a RAID array could be partitioned. I had two separate RAID 5 arrays on the same set of drives. One was for / and the other for /home. I moved the / to an SSD and figured I'd expand the other RAID array by failing a drive, repartitioning it then adding it back in. After repeating for the remaining drives, I could then expand the RAID array to use the full size of the drives.
Partway through the second drive being added back in, the RAID array stopped with a kernel error. The drive I was adding and another drive both showed as failed. I couldn't restart the array so I copied the failed drive (Seagate's SeaTools did show it as faulty, but without SMART being tripped) to a new one and tried again. dd_rescue reported the drive copied correctly but I still couldn't restart the array.
So I tried the old standby of recreating the array. This allows me to start it but the ext3 file system won't mount. So I then tried my script (listed in another thread) to try every combination of drives to assemble the array and mount the file system. Still no luck.
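Between recreate attempts, a read-only probe of the filesystem is a cheap way to tell a plausible member order from a wrong one before trusting it. A sketch only: the --create still rewrites the md superblocks, and the drive order and chunk size below are placeholders, not a recommendation:

Code:
mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 missing
dumpe2fs -h /dev/md0 && fsck.ext3 -n /dev/md0   # read-only; a sane superblock suggests the right order
mdadm --stop /dev/md0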
View 2 Replies
View Related
Jul 17, 2009
I have a 9x320GB RAID5 array that I am migrating over to a 3x1.5TB RAID5 array. Intermittently, a drive would drop out of the older array and it would automatically start rebuilding. I thought it was a bad cable or controller somewhere, so when I bought the three new drives, I bought a new controller for them all, too. I'm running both arrays side by side until I'm happy the new hardware is stable (one drive was DOA). Then I noticed one morning that both arrays were rebuilding themselves. This was in /var/log/messages:
Quote: Jul 5 00:30:19 mnemosyne -- MARK --
Jul 5 00:50:19 mnemosyne -- MARK --
Jul 5 01:06:02 mnemosyne kernel: md: syncing RAID array md0
[code]....
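Two arrays "rebuilding" at the same time in the small hours of a Sunday can also be the distribution's scheduled consistency check rather than a failure. A sketch of how to tell the two apart (the cron path assumes a Debian-style mdadm package, which is an assumption):

Code:
cat /sys/block/md0/md/sync_action      # "check" means a scrub, "recover" means a real rebuild
cat /sys/block/md1/md/sync_action
cat /etc/cron.d/mdadm                  # the checkarray job typically runs on the first Sunday of the month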
View 4 Replies
View Related
Jul 11, 2010
After upgrading my Ubuntu install, my raid array is gone. The drives appear in blkid as "Linux raid member" and both have the same UUID. If I try to mount it via fstab I get a message that the drive is not ready or present. If I try to mount each of the two drives individually, one mounts successfully and the other reports serious errors. Issuing cat /proc/mdstat shows md_d0 as inactive. How can I re-establish my raid array? I have the data backed up, so if I have to wipe out the disks and start over, that's an option.
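A sketch of the usual recovery path: stop the half-assembled md_d0 device, assemble by UUID instead of device names, then regenerate the config and initramfs so it comes up on the next boot (the UUID is a placeholder for the one mdadm --examine prints):

Code:
mdadm --stop /dev/md_d0
mdadm --assemble /dev/md0 --uuid=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx   # placeholder UUID
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u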
View 2 Replies
View Related
Aug 10, 2010
I need to restore a superblock on a software RAID5 array, but I'm not sure whether I'm meant to restore it on md0 or on a member device such as sda1. From what I've read, superblocks are stored on each drive, but I'm not sure if this changes when software RAID is in use.
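For what it's worth, there are two different superblocks in play: the md metadata lives on each member (sda1 and friends), while the filesystem superblock lives on the assembled /dev/md0. A sketch of inspecting both, plus an example of pointing fsck at an ext3 backup superblock (the backup location shown is the common default for 4K-block filesystems, an assumption):

Code:
mdadm --examine /dev/sda1          # per-member RAID superblock
mdadm --detail /dev/md0            # the assembled array's view
dumpe2fs -h /dev/md0               # filesystem superblock on the array itself
fsck.ext3 -b 32768 /dev/md0        # try a backup filesystem superblock if the primary is gone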
View 1 Replies
View Related
Aug 8, 2009
I couldn't post in General. It said I had insufficient permissions to post there, so, this post does have to do with Windows slightly. Sorry that it's here, but I DID read the rules (I searched, and couldn't find an answer to my problem either)
Anyways, I have a RAID5 array 2.72TB (4x1TB drives) which I used in my windows installation, initialized as GPT, and I used "span" to make the single 2TB partition, and 720GB partition into one partition. I believe that Windows created a software RAID0. Ok, so now I've made the leap away from windows, and am going 100% into Linux (Debian, to be exact) and I'm trying to figure out how to mount this array. I've only done basic web/ftp/ircd server management on Linux before, and never anything with mounting drives. I'm a complete n00b at this stuff.
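If the "span" really was made with Windows dynamic disks, the volume is described by LDM metadata, and on Linux ldmtool can map it to a device-mapper volume that mounts like any NTFS filesystem. This is a sketch only; the tool exists, but the volume name below is a placeholder and whether the array was set up this way is an assumption:

Code:
apt-get install ldmtool ntfs-3g
ldmtool scan                          # list LDM disk groups found on the attached disks
ldmtool create all                    # create /dev/mapper/ldm_vol_* devices
mount -t ntfs-3g /dev/mapper/ldm_vol_EXAMPLE /mnt/windows   # placeholder volume name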
View 9 Replies
View Related
Dec 10, 2010
I have acquired a Dell PowerEdge 2850 server and installed Ubuntu Server on it, only to find out we can't use Linux for its intended use and need to uninstall/remove Ubuntu. The server has a RAID 5 array on it.
View 1 Replies
View Related
May 12, 2010
I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died and I could do with either rebuilding the array or getting the data off another way.
I have a spare machine I could use for the recovery process. It has a spare drive but it's only 120GB; I also have a bigger 320GB disk, but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB, or will it have to be 500GB? Next question is what is the best version of Linux to use; I have Knoppix 6.2 and Ubuntu (not sure of the version) already. I noticed that mdadm isn't installed by default on Ubuntu.
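For simply copying the data off, a new 500GB drive isn't strictly required: one half of a RAID1 mirror is enough to start the array degraded and mount it read-only on any machine with mdadm installed. A sketch, assuming the surviving NAS disk shows up as /dev/sdb and the data lives on one of its partitions (the exact partition is an assumption):

Code:
apt-get install mdadm lvm2
mdadm --examine /dev/sdb[1-9]               # find which partition was the data half of the mirror
mdadm --assemble --run /dev/md0 /dev/sdb3   # placeholder partition; --run starts it degraded
mount -o ro /dev/md0 /mnt/recovery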
View 1 Replies
View Related
Apr 8, 2010
Is there a way for me to mount a raid array member directly without using any of the raid tools? For instance, I have a raid 1 array that contains /dev/sda1 and /dev/sdb1. How can I mount /dev/sda1 or /dev/sdb1 directly? Doing mount /dev/sda1 <mnt point> does not work. If I try specifying the filesystem type with -t this doesn't work either.
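Whether this works depends on the metadata version: with 0.90 or 1.0 the superblock sits at the end of the member, so a RAID1 half starts with a plain filesystem and mounts directly; with 1.1/1.2 the data starts at an offset and a loop device is needed. A sketch (the 2048-sector offset is a common 1.2 default and is an assumption; check "Data Offset" in mdadm --examine):

Code:
mdadm --examine /dev/sda1 | grep -iE 'version|data offset'
mount -o ro /dev/sda1 /mnt/point                      # usually works for 0.90/1.0 metadata
losetup -o $((2048 * 512)) /dev/loop0 /dev/sda1       # 1.2 metadata: skip the data offset
mount -o ro /dev/loop0 /mnt/point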
View 1 Replies
View Related
Dec 21, 2010
I have been having some odd issues over the last day or so while trying to get a RAID 5 array running in software under Kubuntu. I installed three 1TB drives and started up, and my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to raid auto-detect and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that the mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.
By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.
Device List(original):
/dev/sda => boot drive, /home /
/dev/sdb => 1.5TB media storage, failing
[code]...
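A "device is busy" on --add usually means a stray, half-assembled md device still claims the disk; stopping that device releases it. A sketch (the stray device name and the member partition are placeholders):

Code:
cat /proc/mdstat                     # look for an extra md device holding the third disk
mdadm --stop /dev/md_d0              # placeholder name for the stray device
mdadm --zero-superblock /dev/sdd1    # only if this member really is meant to rejoin as a blank
mdadm /dev/md0 --add /dev/sdd1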
View 1 Replies
View Related
Nov 16, 2009
I run:

mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1

and I get:

md1: raid array is not clean -- starting background reconstruction

Why is it not clean? Should I be worried? The HD is not new; it has been used in a raid array before but has been repartitioned.
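A freshly created array is flagged "not clean" until its first resync, and with the second mirror half missing there is nothing to resync against, so the message is generally harmless; the array should show up as clean, degraded. A sketch of confirming that, plus the flag that marks the array clean at creation time (whether that is wanted here is a judgment call):

Code:
cat /proc/mdstat
mdadm --detail /dev/md1 | grep -i state
# alternative creation that skips the initial "not clean" state:
mdadm --create /dev/md1 --level=1 --raid-disks=2 --assume-clean missing /dev/sdb1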
View 2 Replies
View Related
Jun 7, 2011
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04 on it, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says:

Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array.

I have tried mdadm --assemble --scan and it gives this output:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.

I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
[code]...
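The next step is normally to compare the event counters on all four members and, if the two "missing" ones are only slightly behind, retry assembly with --force so mdadm accepts them. A sketch with placeholder partition names:

Code:
mdadm --examine /dev/sd[b-e]1 | grep -E 'Events|Update Time|State'
mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1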
View 11 Replies
View Related
Mar 25, 2011
I have been running a server with an increasingly large md array and have always been plagued with intermittent disk faults. For a long time, I attributed those to either temperature or power glitches. I had just embarked on a quest to lower the case and drive temperatures. They were running between 43 and 47C, sometimes peaking at 52C, so I've added more case fan power and made sure the drive cage was in the airflow (it has its own fan, too). Also, I've upgraded my power supply and made very sure that all the connectors are good. The array currently is a RAID6 with 5 Seagate 1.5TB drives.
When everything seemed to be working fine, I looked at my SMART logs and found that two of my drives (both well over 14000 operating hours) were showing uncorrectable bad blocks. Since it's RAID6, I figured I couldn't do much harm: I ran a badblocks test on one, zeroed the blocks that were reported bad (figuring the drive's defect management would remap them to a good part of the disk) and zeroed the superblock. I then added it back to the pack and the resync started. At around 50%, a second drive decided to go, and shortly thereafter a third. Now, with two out of five drives, RAID6 will fail. Fine. At least no data will be written to it anymore; however, now I cannot reassemble the array anymore.
Whenever I try I get this:
Code:
mdadm --assemble --scan
mdadm: /dev/md1 assembled from 2 drives and 2 spares - not enough to start the array
Code:
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [linear]
md1 : inactive sdf1[4](S) sde1[6](S) sdg1[1](S) sdh1[5](S) sdd1[2](S)
7325679320 blocks super 1.0
md0 : active raid1 sdb2[0] sdc2[1]
312464128 blocks [2/2] [UU]
bitmap: 3/149 pages [12KB], 1024KB chunk
Which is not fine. I'm sure that three devices are fine (normally, a failed device would just rejoin the array, skipping most of the resync by way of the bitmap) so I should be able to reassemble the array with the two good ones and the one that failed last, then add the one that failed during the resync and finally re-add the original offender. However, I have no idea how to get them out of the "(S)" state.
Code:
mdadm --examine /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d79d81cc:fff69625:5fb4ab4c:46d45217 .....
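The (S) flags just mean the array is sitting inactive with every member parked as a spare; stopping it and force-assembling with an explicit member list is the usual way out. A sketch using the device names from the mdstat output (which drive to leave out is the judgment call; it should be the one that failed first):

Code:
mdadm --stop /dev/md1
mdadm --examine /dev/sd[defgh]1 | grep -E 'Events|Device Role|Array State'
mdadm --assemble --force /dev/md1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1   # example subset only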
View 2 Replies
View Related
Feb 3, 2011
When we assemble a raid array, where does it load the configuration information for that array from? I thought it referred to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.
# cat /etc/mdadm.conf
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/loop[012345]
[code]...
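Assembly does not rely on mdadm.conf alone: every member carries an md superblock holding the array UUID, level, chunk size, and its own slot, and that is what --assemble reads (many distros also bake an ARRAY line into the initramfs). A sketch of reading that on-disk information directly (the device name is a placeholder):

Code:
mdadm --examine --scan --verbose    # reconstructs ARRAY lines purely from member superblocks
mdadm --examine /dev/sdb1           # full per-member metadata for one device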
View 2 Replies
View Related
Dec 11, 2010
I rebuilt a server and am now trying to recover my large data arrays. The server ran Ubuntu 10.04 LTS before; I decided to rebuild it with CentOS simply because I am more familiar with it. I had 2 RAID-5 arrays on the old server:

4 x 1TB -> md0
5 x 2TB -> md1

The newly built server does not know about these arrays yet. How can I reassemble the arrays without losing my data? I know the data can still be accessed, because booting the server with a live CD mounts and shows the arrays just fine. Should I boot with a live CD and copy the mdadm config file?
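Copying the old config generally isn't necessary, because the array geometry lives in the member superblocks; assembling from them is non-destructive. A sketch of the usual sequence on the new CentOS install:

Code:
mdadm --examine --scan                 # read the arrays straight off the disks
mdadm --assemble --scan                # assemble md0 and md1 without touching the data
mdadm --detail --scan >> /etc/mdadm.conf
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)   # so they also come up at boot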
View 5 Replies
View Related
Aug 20, 2009
In a nutshell, our RAID 1 array was rendered broken and we were advised that core lib files were missing and the OS needed to be reloaded. A quote from our server host: "The OS is not healthy. This server will need a reinstall. Libs are missing." This was after having replaced what we thought was a faulty /dev/sdb. So they reloaded the OS (Debian 5.0.2 x86_64) on 2 fresh drives, and installed the old /dev/sda as /dev/sdc once the reload was completed. Here's the contents of /etc/fstab on the fresh install, so we know what we're working with:
Code:
debian:/BAK# cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
[code]....
The one problem I see myself running into is that /dev/md1 and /dev/md2 are currently in use by the new system, so I cannot mount the old array there. I should also note that reloading the OS is a viable option if needed, as we haven't started configuring the server yet. So if we need to reinstall the OS and assign the new RAID arrays to something other than /dev/md1 and /dev/md2, then we can do that.
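Since md1 and md2 are taken, the old member can simply be assembled under an unused md number and mounted somewhere out of the way. A sketch, assuming the old data partition on the re-attached disk is /dev/sdc2 (the partition number is a guess):

Code:
mdadm --examine /dev/sdc1 /dev/sdc2 /dev/sdc3   # see which old partitions were md members
mdadm --assemble --run /dev/md5 /dev/sdc2       # start the old mirror degraded under a free md number
mount -o ro /dev/md5 /mnt/oldarray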
View 3 Replies
View Related
Sep 27, 2010
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750GB) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:
160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB
Am I missing something or making a false assumption somewhere?
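The numbers in the post (500GB from two 250s, 480GB after adding a 160, and 1.8TB total, which is roughly 11 x 160GB) all match a striped set that limits every member to the size of the smallest disk. A linear (JBOD) concatenation built with mdadm uses each disk's full capacity instead. A sketch with placeholder device names:

Code:
mdadm --create /dev/md2 --level=linear --raid-devices=11 /dev/sd[b-l]
mkfs.ext4 -m 0 /dev/md2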
View 4 Replies
View Related
Jun 23, 2009
I need to mount my RAID array on a CentOS 5.2 Samba server.
Here are my hardware specs:
Motherboard: Tyan S2510 LE dual PIII
CPUs: Intel PIII 850MHz, Socket 370
Memory: 4GB Crucial 133 ECC SDRAM
OS: 2 x IBM Travelstar 6.4GB 2.5" hard drives (low heat/noise)
Storage: 4 x Seagate 500GB IDE 7200 RPM
RAID controller: 3Ware 7500-12 controller, (RAID 5) (66 MHz PCI bus)
NIC: 3COM 3C996B-T gigabit NIC, (66 MHz PCI bus)
I have the 2 IBMs set as RAID 1 (mirror) and the 4 Seagates as RAID 5 (1.5TB). I have installed the OS with minor problems (the motherboard doesn't like the 2.6.18-128.1.14.el5 kernel, so I removed it from my grub.conf).
My problem is mounting the RAID array. I have done the following:
partitioned the drive with fdisk:
fdisk /dev/sdb
Then formatted it with the following command:
mkfs.ext3 -m 0 /dev/sdb
The hard drive was formatted with the ext3 file system, but I have mounted it as an ext2 file system as I don't want 'journaling' to occur. I then edited my /etc/fstab like this: .....
Then: mount -a
When I go into my "home" directory and type ls, I get the following:
[root@hydra home]# ls -l
total 24
drwx------ 2 zog zog 4096 Jun 23 15:50 zog
lrwxrwxrwx 1 root root 6 Jun 23 15:46 home -> /home/
drwxrwxrwx 2 root root 16384 Jun 23 15:34 lost+found
drwxr-xr-x 2 root root 4096 Jun 23 17:18 tmp
Why is my home directory showing under home?
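Two things stand out above: mkfs was run on the whole disk (/dev/sdb) immediately after partitioning it with fdisk, and the array appears to have been mounted over /home (the lost+found there suggests a filesystem root). A sketch of the more conventional sequence, with a dedicated mount point (names are examples, not taken from the post):

Code:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary ext3 1MiB 100%
mkfs.ext3 -m 0 /dev/sdb1
mkdir -p /srv/storage
echo '/dev/sdb1  /srv/storage  ext3  defaults  1 2' >> /etc/fstab
mount -a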
View 5 Replies
View Related
Jun 10, 2011
I am trying to build a new array after adjusting TLER on my disks, which permanently changed some of the drive sizes. I am not sure if the following inconsistencies are related to the newly mismatched drive sizes.
Using:
Code:
mdadm --create --auto=md --verbose --chunk=64 --level=5 --raid-devices=4 /dev/md1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
Nets me (build-time was two full days):
[Code]....
On a side note, since I'm recreating my array from scratch, I was wondering if anyone here knows of any optimized settings I could use. I've got 3TB of data to transfer, so lots of test material.
These are Western Digital first-generation 2TB Green drives (WD20EADS-00R6B0) with the WDidle3 fix applied and TLER=ON. They are pre-Advanced-Format (i.e. not 4K-sector) drives.
Code:
mkfs.ext4 -E stripe-width=48,stride=16 /dev/md1
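For reference, the geometry in that command already lines up: a 64KiB chunk with 4KiB blocks gives stride = 16, and three data disks in a 4-disk RAID5 give stripe-width = 16 x 3 = 48. One other knob that often helps RAID5 write and rebuild speed, at the cost of some RAM, is the md stripe cache; a sketch of optional tweaks:

Code:
echo 8192 > /sys/block/md1/md/stripe_cache_size    # not persistent across reboots
blockdev --setra 4096 /dev/md1                     # larger readahead, also optional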
View 9 Replies
View Related
Oct 27, 2010
We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):

Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img

then reverse the process on the new PC, finally using

Code:
mdadm --assemble

to re-create and re-build the array.
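A sketch of the restore side, under the assumption that the image holds one complete member of the RAID1 pair (partition table included) and that the md members are the first partitions; md then rebuilds the mirror onto the second disk:

Code:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
sfdisk -d /dev/sda | sfdisk /dev/sdb     # copy the partition layout to the blank second disk
mdadm --assemble --scan                  # the imaged member carries the array superblock
mdadm /dev/md0 --add /dev/sdb1           # placeholder partition; the mirror resyncs automatically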
View 1 Replies
View Related
Nov 30, 2010
I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check that there were no problems, and it booted. I have installed the old drive back in the system as hdc and need to resync the drives (I used the old drive, so the partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary and hdc secondary) using mdadm?
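raidhotadd belonged to the old raidtools package; with mdadm the returning partition is simply hot-added and the kernel resyncs it. A sketch, assuming one md device per mirrored partition (partition numbers are placeholders):

Code:
sfdisk -d /dev/hda | sfdisk /dev/hdc   # only needed if the partition layout doesn't already match
mdadm /dev/md0 --add /dev/hdc1
mdadm /dev/md1 --add /dev/hdc2
watch cat /proc/mdstat                 # watch the resync progress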
View 1 Replies
View Related
Feb 20, 2011
This is the message I get when I try to start it:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array

Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off of it would be awesome.
sudo mdadm --examine --scan -v
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=91c36708:a7cbb532:5b51dc92:ba008491
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1,/dev/sda1
[code]...
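With two of the four RAID10 members missing, the usual next step is to check whether those disks are visible at all and how far their event counters lag; if they are only slightly behind, a forced assembly will usually accept them. A sketch using the device names from the scan output:

Code:
cat /proc/partitions                  # confirm all four disks are actually present
mdadm --examine /dev/sd[a-d]1 | grep -E 'Events|Update Time|State'
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1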
View 1 Replies
View Related
Feb 22, 2011
I have a huge problem with my file server (openSUSE 11.3, 64-bit, kernel 2.6.34.7-0.7-default). I've just installed an Intel SASUC8I card, connected 3 of the 7 Samsung 2TB drives I have to it, and after about one hour it dropped 2 of the disks. I've managed to trace the problem to the card BIOS, which I've replaced with the non-RAID edition, so it should now work fine with the kernel RAID. The problem is that I can't find a way to "un-fail" these 2 disks. I'm more than positive that these drives are just fine; only the controller was misbehaving. The dropout couldn't have created any data inconsistency either, since the 2 drives dropped out virtually at the same time and there was no writing being done at the time. I've tried add/re-add, and I get either "mdadm: cannot get array info for /dev/md0" or "mdadm: add new device failed for /dev/sdi1 as 7: Invalid argument" (depending on whether the RAID is running or stopped; in either case, mdstat reports it to be inactive).

For a normal or forced assemble, I get "mdadm: /dev/md0 assembled from 5 drives and 1 spare - not enough to start the array". I've been googling like crazy, also trying to get info from mdadm's help and man page, but nothing seems to deal with such a freak accident. Another interesting thing is that if I reboot the system, mdstat shows md0 as inactive, but lists all the devices with no flags. It's only after a run command that it changes to the 5 remaining devices, all with (S) flags. Alternatively: does anyone know where device failure info is stored? If I could in some way remove this information from the system (even by reinstalling the OS), I should be able to reassemble the array... Or is it stored in the member drives' superblocks? About 80% of this array's data is backed up, so if all else fails I can restore most of its content, but I'd much prefer to reassemble this one as a whole, since there was absolutely no chance of data corruption.
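To answer the storage question: the failed/spare state lives in each member's md superblock (event count and device role), not in a file on the OS disk, which is why reinstalling the OS would not clear it. The gentler fix is a forced assembly with an explicit list of the members believed good, so their superblocks are brought back into agreement. A sketch with placeholder device names:

Code:
mdadm --examine /dev/sd[c-i]1 | grep -E 'Events|Device Role|State'
mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1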
View 1 Replies
View Related
Jun 15, 2011
I'm a bit at a loss on this one. I couldn't get a drive from a former RAID5 array to format. I did a dd to write zeros to the drive and attempted to fsck, only to be stopped every time with the error: Couldn't find ext2 superblock, trying backup blocks.. fsck.ext3: Bad magic number in super-block while trying to open /dev/sda1
smartctl shows no problems with the drive (a Seagate 750GB), but I haven't removed it and thrown it in a Windows machine to do Seagate's proprietary drive diagnostics yet. I'm running CentOS 5.6. I've never had this problem before. The drive is not mounted and the old md device has been removed, as far as I can tell. It could still be attempting to assemble the RAID5 with the one drive, but I didn't see it attempt to do so.
[Code]...
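After zero-filling, there is no filesystem left, so fsck's "bad magic number" is expected until a new filesystem is created; clearing any leftover md metadata first also keeps the kernel from re-claiming the device. A sketch:

Code:
mdadm --zero-superblock /dev/sda1   # harmless error if no md superblock remains
mkfs.ext3 /dev/sda1
fsck.ext3 -f /dev/sda1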
View 3 Replies
View Related
Mar 19, 2011
Yesterday I created a RAID5 array, /dev/md0, consisting of 5 hard disks, named sda through sde at the time of creation. After that I stored some data on the array without any difficulties, then shut down the computer. Early this morning when starting the computer, I got a message that /dev/md0 was not ready to be mounted. So I checked the RAID array and discovered that the enumerator had been messing with the hard disks: hard disk sda was now sdc, etc.

After I rebooted, the hard disks got their original names again: sda was sda again. When I mounted the array, no problems occurred. So it seems that the order in which the hard disks are enumerated influences the availability of the RAID array. Is there a way to avoid this kind of problem with a RAID array?
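md itself identifies members by the UUID stored in their superblocks, so assembly can be made independent of whether a disk enumerates as sda or sdc by referencing the UUID in mdadm.conf rather than device names. A sketch (the UUID is a placeholder for the one --detail --scan prints, and the config path varies by distro):

Code:
mdadm --detail --scan
# /etc/mdadm/mdadm.conf (or /etc/mdadm.conf)
DEVICE partitions
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx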
View 4 Replies
View Related