Hardware :: Raid1 Mdadm Repair Degraded Array With Used Good Hard Drive?
Jun 27, 2009
I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID1 array. mdadm --detail /dev/md0 shows:
0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1
I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue mdadm --manage /dev/md0 --fail /dev/sda1, but mdadm's response is: mdadm: hot remove failed for /dev/sda1: no such device or address. I thought I had to mark the failed drive as "failed" to prevent the RAID1 from mirroring in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the array from automatically trying to mirror in the wrong direction.
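Since the --detail output above already shows that slot as "removed", there is nothing left to --fail, which is why mdadm reports "no such device or address". A commonly suggested sequence for this kind of swap (a sketch only; the /dev/sda and /dev/sdb names are assumptions based on the output above, so double-check them on your system) is to copy the partition layout from the surviving disk, wipe any stale md metadata on the used drive, and then add it; mdadm only ever syncs from the active member to the newly added one, so the mirror cannot run in the wrong direction:
Code:
sfdisk -d /dev/sdb | sfdisk /dev/sda     # copy partition table from the surviving disk
mdadm --zero-superblock /dev/sda1        # wipe any old RAID metadata on the used drive
mdadm --manage /dev/md0 --add /dev/sda1  # rebuild starts from /dev/sdb1 onto /dev/sda1
cat /proc/mdstat                         # watch the resync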
View 7 Replies
Aug 31, 2010
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced in, to make a raid5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive back without any problems later; it's just that this takes hours to sync. Here is some information:
[Code]....
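Two things that often help in this situation (a sketch, not a definitive fix; the /etc/mdadm/mdadm.conf path is the Debian/Ubuntu default) are recording the array by UUID in mdadm.conf so it assembles complete at boot, and adding a write-intent bitmap so a re-added member only resyncs changed blocks instead of taking hours:
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definition by UUID
update-initramfs -u                              # so boot-time assembly sees it
mdadm --grow /dev/md0 --bitmap=internal          # shortens future re-adds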
View 11 Replies
View Related
May 12, 2010
I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died and I could do with either rebuilding the array or getting the data off another way.
I have a spare machine I could use for the recovery process. It has a spare drive but it's only 120 GB; I also have a bigger 320 GB disk but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB or will it have to be 500? Next question is what is the best version of Linux to use; I have Knoppix 6.2 and Ubuntu (not sure on version) already. I noticed that mdadm isn't installed by default on Ubuntu.
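Worth knowing before buying anything: each member of a RAID1 pair holds a complete copy of the data, so the surviving 500GB disk can usually be assembled and mounted on its own in the spare machine, and the spare disk only needs to be large enough to hold whatever you copy off. A minimal sketch, assuming the surviving member shows up as /dev/sdb1:
Code:
mdadm --assemble --run /dev/md0 /dev/sdb1   # --run starts the mirror with one member
mount -o ro /dev/md0 /mnt                   # mount read-only and copy your data off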
View 1 Replies
View Related
Aug 7, 2011
I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.
OpenSuse 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x IDE 500GB disks. Creating /dev/md0 as a 250MB partition across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB partition across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there were a series of mishaps with faulty IDE cables that had drives failing to show up at boot. Somehow, /dev/sdd2 ended up configured in array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.
The obvious step of failing the partition, removing it, then adding (or re-adding) it just brings it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was to 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. I thought this might be the trick - after the zero, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.
So 'mdadm --add /dev/md1 /dev/sdd2' and it still comes back as a spare. Here is mdadm -D /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Sat Jul 9 10:26:01 2011
Raid Level : raid1
Array Size : 488119160 (465.51 GiB 499.83 GB)
code....
I can't stop this array, the OS is running from there. I can't easily boot from CD to repair, all IDE ports have disks attached.
Does anyone have an incantation to promote a spare to active?
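One hedged possibility worth checking: if the cable mishaps left md1 believing it only has one raid slot, anything you add will sit as a spare forever. If "Raid Devices : 1" shows in the -D output, growing the array back to two slots is usually what lets the spare start syncing as an active mirror:
Code:
mdadm --grow /dev/md1 --raid-devices=2
cat /proc/mdstat    # the spare should begin recovering as an active member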
View 2 Replies
View Related
Mar 18, 2010
I wonder how to attach a new SATA hard disk to a software array where there are two disks and one has crashed (this is a mirroring mode, RAID1). The situation is this: I unplugged the crashed disk, bought a similar one and plugged it in. What should I do next?
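The usual next steps (a sketch; /dev/sda is assumed to be the surviving disk and /dev/sdb the new one, so adjust for your system) are to give the new disk the same partition layout as the survivor and then add its partition back into the mirror:
Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb      # copy the partition table to the new disk
mdadm --manage /dev/md0 --add /dev/sdb1   # add it; RAID1 rebuilds onto it automatically
cat /proc/mdstat                          # watch the rebuild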
View 4 Replies
View Related
Nov 19, 2010
Like it says in the title, I am thinking it shouldn't be this hard to install the RAID1 array in my brand new PC. Here is what is happening. I have two brand new 1TB drives that I am attempting a new, fresh install of 10.10 on (in fact, the entire box is new). I am attempting to use the alternate desktop install so that I can have access to the manual partitioning (which is required to set up RAID 1, correct?).
I tried to use the guide here: [URL]... I followed the steps, but when I got to the very end (after selecting and creating the MDs) I get an error message stating that there is no root file system defined. I went back and checked all the steps and I am sure I followed everything in the guide.
Here are some quirks (not sure if they are bugs or not). In step 5 of the disc partitioning, it says to select the bootable flag and set it to yes (I am assuming). I press enter over that option, the screen flashes really quickly to a progress bar, but then comes back to the options screen and it still says the bootable flag is off. No matter how many times I do it, it says "off".
Also, and here is the bigger problem I think - the guide says to select the free space in each drive and then select Automatically Partition the free space, which I do, and it comes back and looks formatted accordingly - it has 975.6 GB ext4 / and 24.6 GB swap. No problem there.
BUT - whenever I do the same thing to the second drive, the partitions on the first seem to disappear. Meaning, it doesn't say free space, and has two partitions listed, but the / and the swap (the last items in each row) have moved to the second drive's partitions. I am not sure if this is how it is supposed to be, since the pictures in the linked guide do not show what it looks like after that. This is driving me crazy; I have to have it set up in RAID 1 and I am unsure what I am missing.
View 1 Replies
View Related
May 14, 2011
I installed a distro based on CentOS 5.5 (FreePBX distro FYI). It used an automated kickstart script to create an md RAID1 array of all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the array. So, I ended up with three md arrays (boot, swap, data) that included the thumb drive. Even better, it used the thumb drive for grub boot so I couldn't start up without it. I was able to mark the USB drive as 'failed' and remove it from each array, and even change grub around to boot without the USB drive, but now each of the arrays is marked as degraded:
[Code]...
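Since the thumb drive is gone for good, one hedged way to clear the degraded state is to shrink each RAID1 array from three raid devices down to the two real disks (shown for one array; repeat for the boot, swap and data arrays after confirming their names):
Code:
mdadm --grow /dev/md0 --raid-devices=2
cat /proc/mdstat    # the array should report [2/2] [UU] again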
View 1 Replies
View Related
Jun 7, 2011
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04 on it, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't. When I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
[code]...
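A common diagnostic path here (a sketch; /dev/sd[b-e]1 are placeholders for your four members, and --force can be risky if one member is badly out of date, so compare the Events counters first) is to examine each member to see which ones mdadm considers stale, then force-assemble from the freshest set:
Code:
mdadm --examine /dev/sd[b-e]1 | grep -E 'Events|State|UUID'
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[b-e]1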
View 11 Replies
View Related
Jan 13, 2010
I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard discs, for running backups and web hosting. There are reviews of certain disks stating that the low-power disks will get kicked out of the Raid due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.
Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.
View 2 Replies
View Related
Jul 2, 2010
The motherboard currently installed on my PC has a RAID Utility (Ctrl+I) at the startup that allow creating RAID1. But I already have a system installed with CentOS 5.4. In order to protect my data, I need RAID1. Can I add another Hard Drive now and have the data mirrored and synced onto both hard drives as if it was in RAID1 right from the beginning?
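It can be done with Linux software RAID rather than the BIOS utility, but it is not automatic. The widely used approach (only a sketch of the idea, with example partition names, not step-by-step CentOS 5.4 instructions) is to create a degraded RAID1 on the new disk with a "missing" member, copy the system across, boot from the array, and then add the original disk so it syncs in:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # new disk only
mkfs.ext3 /dev/md0
# ... copy the data, adjust fstab and grub, boot from md0, then:
mdadm --manage /dev/md0 --add /dev/sda1                                # old disk joins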
View 2 Replies
View Related
May 6, 2011
I recently upgraded from lenny to squeeze, and my raid array is degraded immediately after boot. Some info on my machine: I've got a built-in SATA II chipset with 4 drives /dev/sda-d that I use for my RAID5 system, and an IDE drive /dev/sde. Before I upgraded to squeeze, the IDE drive was at /dev/hda. I did the usual 2-step upgrade (kernel/udev first, reboot, then everything else). After the first reboot, the IDE drive became /dev/sda and the SATA drives were /dev/sdb-e. I updated mdadm.conf to reflect the new drive naming and added /dev/sde to the array; it rebuilt successfully and everything was back online. After the 2nd reboot, the IDE drive became /dev/sde and my SATA drives went back to /dev/sda-d. No biggie; updated mdadm.conf again, rebuilt, and everything works.
Now that everything has been upgraded, the RAID array still becomes degraded upon boot. I can always add /dev/sda back to the array, and it's always rebuilt successfully. Here are some interesting lines from dmesg. It finds all my drives:
[ 2.376202] scsi 0:0:0:0: Direct-Access ATA ST31000340AS AD14 PQ: 0 ANSI: 5
[ 2.376636] scsi 1:0:0:0: Direct-Access ATA ST31000340AS AD14 PQ: 0 ANSI: 5
[ 2.376968] scsi 2:0:0:0: Direct-Access ATA ST31000340AS AD14 PQ: 0 ANSI: 5
[code]....
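Since the kernel keeps shuffling the drive names, one hedged suggestion is to stop listing members by device name and let mdadm identify the array purely by UUID, then regenerate the initramfs so boot-time assembly follows the same rule (paths are the Debian defaults):
Code:
# in /etc/mdadm/mdadm.conf keep the device line generic:
#   DEVICE partitions
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # ARRAY line keyed by UUID
update-initramfs -u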
View 3 Replies
View Related
Jun 11, 2010
So my server's 7 HDs in RAID 5 were all working well until one of them died. The HD that died sort of works: it can read about half of a file, but it also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: "The drive for /media_kbt is not ready or present. Press S to skip or M for manual recovery." I hit S and then go to Disk Utility, but I can't start the array or add disks to it.
Here is me trying to do random stuff
Code:
administrator@3dslice-host:~$ sudo mdadm --stop /dev/md0
[sudo] password for administrator:
mdadm: metadata format 00.90 unknown, ignored.
mdadm: stopped /dev/md0
administrator@3dslice-host:~$ sudo mdadm --add /dev/md0 /dev/sda1
mdadm: metadata format 00.90 unknown, ignored.
[Code]...
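As an aside, the "metadata format 00.90 unknown, ignored" lines usually just mean the ARRAY entry in mdadm.conf says metadata=00.90, which newer mdadm releases no longer accept; changing it to 0.90 (or dropping the metadata= field) silences the warning. A hedged example of the edit, with the UUID left as a placeholder:
Code:
# /etc/mdadm/mdadm.conf - before
ARRAY /dev/md0 level=raid5 num-devices=7 metadata=00.90 UUID=<uuid>
# after
ARRAY /dev/md0 level=raid5 num-devices=7 metadata=0.90 UUID=<uuid>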
View 2 Replies
View Related
Jun 26, 2011
Ubuntu Server 11.04 i386. I've used linux on and off for years but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS, but decided to give Ubuntu+Webmin a try. And up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array. I shutdown, remove a drive, then boot up. The array wont start at all, and comes up with this error during boot:
Quote:
the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery
If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a" I get the response:
Quote:
mount: special device /dev/RAIDVG1/RAIDLV1 does not exist
I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine, and is in a clean state.
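Two hedged things to try: BOOT_DEGRADED only takes effect once the initramfs has been rebuilt, and from the manual-recovery shell the array can usually be started degraded by hand so the LVM volume reappears (the volume group name below is taken from the error message above):
Code:
sudo update-initramfs -u        # after editing /etc/initramfs-tools/conf.d/mdadm
# from the recovery shell:
mdadm --assemble --scan --run   # --run allows starting with a missing member
vgchange -ay RAIDVG1            # activate the LVM volume on top of the array
mount -a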
View 9 Replies
View Related
Nov 22, 2009
Here's a brief description of my system:
120GB Sata HDD - Primary OS drive
3 x 1.0TB Sata HDD - Raid 5 array
This is on a C2D MSI P35 Platinum board. Anyway, did a fresh install of F12 on the 120GB, which I had problems with - Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disc under the Live CD and proceeded to install F12.
Once F12 was installed, I loaded up mdadm to re-activate my Raid 5 array, using 'sudo mdadm --assemble --uuid=<the uuid>' - and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. Ok, so I erased /dev/sdb, intending to rebuild the array, and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb' and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.
[Code].....
I don't believe the hard drives are connected in the exact same order they were in before - I disconnected everything in the system and blew it out (it was pretty dusty)
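That error generally means mdadm sees the array as part of an external-metadata (BIOS fake-RAID) container, which fits the leftover 'nvidia_raid_member' signature the Live CD reported. A hedged way forward is to check /proc/mdstat for a parent container, and if the fake-RAID signature on /dev/sdb is just a leftover, erase it so the disk becomes a plain md member again (this assumes the dmraid tool is installed; double-check the target device before erasing anything):
Code:
cat /proc/mdstat                # look for a container device such as md127
dmraid -r                       # list disks still carrying BIOS-RAID signatures
dmraid -rE /dev/sdb             # erase the stale signature from the wiped disk
mdadm --zero-superblock /dev/sdb
mdadm --add /dev/md0 /dev/sdb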
View 1 Replies
View Related
Jul 5, 2010
I also get sent to a Busybox (initramfs) shell with no text editor and don't know how to copy all the error messages and post them here. If there is a way, let me know. I've typed it out in the meantime:
Code:
md0 : inactive sdxxxx
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
[Code].....
This is with a 3 disk RAID5 array. I turned off the system, pulled out a drive, and started it back up. Fresh install, all I've done so far is apt-get update and upgrade.
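From the (initramfs) prompt the array can usually be pushed into degraded mode by hand, after which the boot continues; longer term, the usual suggestion is BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm followed by an initramfs rebuild. A sketch of the manual recovery, using the md0 name from the message above:
Code:
(initramfs) mdadm --stop /dev/md0
(initramfs) mdadm --assemble --scan --run
(initramfs) exit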
View 4 Replies
View Related
Mar 31, 2011
I've got 2 servers (xen1 and xen2 - their hostnames) with the configuration below. Each server has 4 SATA disks, 1 TB each.
16 Gb ddr3
debian squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB and 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. There is LVM2 installed over that RAID10. The volume group is named xenlvm (the servers are expected to be used as Xen 4.0.1 hosts, but the story is not about Xen troubles). /, /var and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):
[Code]...
View 3 Replies
View Related
Aug 11, 2010
Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.
[Code]....
Then I tested my RAID by hot-pulling the sda cable (ouch). It worked fine, the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, since the first drive was missing). Now I am trying to recover this pre-crash state. Adding the first disk back (it shows up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk as sda...
At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen that would let me wait for a boot for weeks... So my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again was: zero sda's first and last gigabyte to kill any IDs; duplicate sdb's first cylinder to sda to make it bootable; reinitialize sdb's partition table using command 'o' in fdisk for a new disk-id; recreate the sda1 partition; add sda1 to md0.
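Since the rebuilt first disk was only ever synced at the RAID/LVM level, it most likely has no working boot loader of its own. A hedged fix is to reinstall GRUB onto the MBR of both disks from the running system, so either disk can boot alone:
Code:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub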
View 1 Replies
View Related
Mar 11, 2010
I am running Kernel 2.6.18-128.el5 on a 64bit quad core machine with 8GB RAM. Using "mdadm" I setup a RAID1 array between two Western Digital 1.5TB drives. The problem is that the resync is running VERY slow. Here is a current status.
[root@royalflush shared]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdc5[1] hda5[0]
1304493952 blocks [2/2] [UU]
[=>] resync = 6.2% (81592192/1304493952) finish=4280156.0min speed=4K/sec
unused devices: <none>
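One knob that often matters for a crawling resync (a sketch; the value is only an example) is the kernel's RAID speed limit. If the disks themselves are healthy, raising the minimum usually gets things moving:
Code:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # KB/s, example value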
View 1 Replies
View Related
Nov 27, 2010
Posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to a mdadm raid1 array. I partially followed this guide: [URL].. What I did was to boot up a livecd and make three partitions on both of my empty disks, one for /, one for swap and one for /vz (it's an openvz server). I made those partitions into separate raid1 arrays and then rsync-ed everything from the old disk to the new partitions.
After I had moved everything I did chroot into the new raid array and edited both grub config files and fstab, according to the guide.
[Code]...
I have managed to run the system on the raid1 disks when using a Super Grub2 Disk off a CD, but that has its own grub and can boot any distro, so I can see that the system is working fine, except for grub. I have tried installing grub both from a livecd (Ubuntu 64-bit) and when booted into the raid1 array, but it gives the same results as stated above.
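For CentOS 5 (GRUB legacy) the usual hedged advice is to install GRUB onto the MBR of both RAID1 disks from the grub shell, so either disk can boot; the (hd0)/(hd1) mappings below are examples and assume /boot is the first partition of each disk, so check them against device.map first:
Code:
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit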
View 2 Replies
View Related
Jan 11, 2010
I am planning on setting up a 4x1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm"; all worked fine except for the following error:
Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found failed.
The end result is that the /dev/md0 device has not been created, as can be seen here:
Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
After googling, I found the following bug which describes the issue: [URL] However it was reported way back in April 2009, and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug, to get me up and running?
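The workaround usually suggested for that MAKEDEV bug (hedged; the drive names are placeholders for your four disks, and minor number 0 matches md0) is simply to create the missing device node by hand and carry on:
Code:
sudo mknod /dev/md0 b 9 0
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1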
View 4 Replies
View Related
Jan 28, 2010
I'm trying to set up a RAID1 partition on my Ubuntu 9.10 workstation.On this dual-boot system, Ubuntu is running from a separate drive (/dev/sdc - an SSD that is quite small, which is why I need more disk space). Besides that, there are two traditional 500 GB hard drives, which have Windows 7 installed (I want to keep the Windows installation intact), and about half of the space unallocated. This space is where I want to set up a single, large RAID1 partition for Linux.
(This, to my understanding, would be software RAID, whereas the Windows partitions are on hardware RAID - I hope this isn't a problem... Edit: See Peter's comment. I guess this shouldn't be a problem since I see both drives separately on Linux.) On both disks, /dev/sda and /dev/sdb, I created, using fdisk, identical new partitions of type "Linux raid autodetect" to fill up the unallocated space.
Device Boot Start End Blocks Id System
/dev/sda1 1 10 80293+ de Dell Utility
/dev/sda2 * 11 106 768000 7 HPFS/NTFS
[code]....
But so is "Device or resource busy" when trying to create the RAID array. Quite strange.
Update: Could the device mapper have something to do with this? How do /dev/mapper and dmraid relate to all this mdadm stuff anyway? Both provide software RAID, but.. differently? Sorry for my ignorance here. Under /dev/mapper/ there are some device files that, I think, somehow match the 3 Windows RAID partitions (sd{a,b}1 through sd{a,b}3). I don't know why there are four of these arrays though.
$ ls /dev/mapper/
control isw_dgjjcdcegc_ARRAY1 isw_dgjjcdcegc_ARRAY3
isw_dgjjcdcegc_ARRAY isw_dgjjcdcegc_ARRAY2
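A hedged place to look for the "Device or resource busy": the isw (Intel fake-RAID) signature makes dmraid/device-mapper claim the whole disks, so the new partitions may already be held by a /dev/mapper mapping and therefore be busy as far as mdadm is concerned. Listing the mappings shows whether that is the case; if one of them covers only the new Linux partitions (and not the Windows arrays), removing just that mapping usually frees them for mdadm:
Code:
sudo dmraid -r     # which disks carry fake-RAID metadata
sudo dmsetup ls    # which mappings device-mapper currently holds
# only if a mapping covers just the new Linux partitions:
# sudo dmsetup remove <name-of-that-mapping>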
View 2 Replies
View Related
Feb 7, 2011
I have a RAID1 array, where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection from the mdadm -E /dev/sdb1 command says that sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive is failed? Am I misreading the command outputs?
Code:
sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
[Code]...
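One hedged way to untangle it is to ask each member partition directly and compare the Events counters; the member with the lower (stale) count is the one the array considers gone, regardless of which letter the kernel handed out this boot (the md0 name is an assumption):
Code:
sudo mdadm --detail /dev/md0
sudo mdadm --examine /dev/sda1
sudo mdadm --examine /dev/sdb1
# compare the "Events" and "State" lines of the two --examine outputs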
View 3 Replies
View Related
Nov 14, 2010
One of the hard drives in my server failed the other day, backups saved the day and downtime was only a few hours, but when setting up the new drive I went ahead and migrated to software RAID, in the hopes it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.
sda was the original drive with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions of sda, added it to a RAID1 array, copied files from sda, and reboot to md4 as /, md1 as /boot, and md2 as swap. I added the sda partitions to the array, and the sync went off without a hitch on md1 and md2, md4 progresses well, but after a few hours /proc/mdstat just shows this:
Code:
root@d668:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sda2[0]
9767424 blocks [2/2] [UU]
[Code]...
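A couple of hedged checks for a sync that seems stuck: the md sysfs files show whether the kernel still considers a recovery to be in progress, and dmesg often reveals whether read errors on the old disk keep interrupting it:
Code:
cat /sys/block/md4/md/sync_action    # expect "recover"/"resync", or "idle" if done
cat /proc/mdstat
dmesg | grep -iE 'md4|ata|error' | tail -n 50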
View 3 Replies
View Related
Jul 22, 2011
I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as Raid1 with hot spare. If I pull one of the active disks, all file i/o will stop for about 2.5 minutes after which it will start again and the raid array will be rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:
12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)
[code]....
View 1 Replies
View Related
Feb 27, 2011
I've run into a problem with the server freezing under heavy write load.
System
CentOS 5.5 x64_86 with latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2Gb DDR3.
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0
[Code].....
View 19 Replies
View Related
Dec 29, 2010
I have just upgraded to Fedora 14 from an older version. I now have problems mounting my RAID1 array, which was operating correctly until now. This is a software RAID which was initially built under Fedora 10. The array is md0, and is made of 2 SATA drives (sdc and sdd) which have only one partition. The underlying filesystem is NTFS. The array is assembled correctly and active, as reported by /proc/mdstat and mdadm -D. When I try to mount the array, I get this:
Code:
[root@Goofy ~]# mount /dev/md0 /mnt/raid
mount: you must specify the filesystem type
[code].....
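Since the array carries NTFS, mount generally has to be told the filesystem type explicitly and the ntfs-3g driver has to be present; a hedged example:
Code:
yum install ntfs-3g                   # if it is not already installed
mount -t ntfs-3g /dev/md0 /mnt/raid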
View 4 Replies
View Related
Sep 8, 2010
I did a search and couldn't find anything pertaining to this - if I've missed something please direct me in the right direction We have an Ubuntu box set up as a headless office server (latest desktop install of Ubuntu) and we recently set up two 1TB HDDs in a RAID1 array using mdadm - as far as I can tell it worked successfully and created /dev/md0 with an ext3 file system. After sharing the drive I can see it from the other office computers and could transfer data to and from the RAID array just fine.
I didn't figure out how to get it to automatically mount on boot so I restarted it to see if it would do so by default - however, when I restarted I couldn't see the RAID array any longer on the desktop and it came up as a 0.0kb RAID array in Disk Utility, saying it was broken. It wouldn't let me check it until I stopped and restarted the array.
After restarting I hit "check array" and it appears to be repairing the drives. What have I missed? What happened here? How can I fix it? What other info can I provide to assist:
sudo blkid shows:
/dev/sda1: UUID: "<snip>" TYPE="ext3" (system HDD)
/dev/sda5: UUID: "<snip>" TYPE="swap"
/dev/sdc1: UUID: "<snip>" TYPE="linux_raid_member"
/dev/sdb1: UUID: "<snip>" TYPE="linux_raid_member"
/dev/md0: UUID: "<snip>" SEC_TYPE="ext2" TYPE="ext3"
disk utility: RAID Array: Mirror (RAID-1), Metadata version 0.90.0, Partitioning: Not Partitioned, Components: 2...
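What usually bites people here is that the array was never recorded for boot-time assembly, so after a reboot it comes up half-built. A hedged sequence once the array reports clean again (Ubuntu paths assumed; the UUID and mount point below are placeholders):
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# /etc/fstab line for auto-mounting, using the UUID blkid reports for /dev/md0:
# UUID=<snip>  /srv/share  ext3  defaults  0  2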
View 1 Replies
View Related
Mar 13, 2011
One of the HDDs in my 2 drive RAID1 md (Linux software raid) is intermittently clicking. When this occurs, I can hear a loud clicking noise at uniform intervals and then it stops. It's like "click..click...click..click...click..click...click..click". You know what I mean.
It does it about 4 times per hour. I believe this drive is about to die.
Until I find a replacement drive, can I run into problems with the data on the array? I believe the mdadm utility would tell me if the drive was faulty, and once I replace the drive, it would auto rebuild the array (re-copy the data to the new drive)?
I have over 1.2TB of data on this array and I really don't want to lose everything...
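Until the replacement arrives, a hedged precaution is to check the suspect drive's SMART counters and make sure mdadm is actually monitoring the array, so a real failure gets flagged (and mailed) instead of silently degrading the mirror; /dev/sdX below is a placeholder for the clicking drive:
Code:
sudo smartctl -a /dev/sdX | grep -iE 'reallocated|pending|overall-health'
sudo mdadm --monitor --scan --daemonise --mail=root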
View 5 Replies
View Related
Aug 19, 2010
I have two HDs (let's say sda and sdb). Both are the same size and have the same partitions already (sda1/sda2/sda3 and sdb1/sdb2/sdb3). Basically they are ready to make a RAID1 array.
By writing new udev rules, I was able to create fixed HD device names based on /sbin/scsi_id.
Example: For sdb1 I have a fixed device name created under /dev as hd2_boot1, for sdb2 I have /dev/hd2_boot2 and finally for sdb3 I have created the device /dev/hd2_boot3.
Using the command "mdadm --create /dev/md0 --level=1 ....", I could create a RAID array.
But when I check the status of one of the RAID devices, e.g. with the command "mdadm --detail /dev/md2", it still shows the sdb* devices as members of the RAID array, not the hd2_boot* devices. Something like this:
I would like to always see /dev/hd2_boot3 as the member of the RAID array, not /dev/sdb3 (like above) - is this possible?
Bottom line: I would like the order of the RAID arrays to depend on their SCSI IDs, not on the kernel-assigned device names, since those names (sda, sdb, sdc, etc.) can change depending on the physical connections.
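As far as I know mdadm --detail simply reports whichever kernel name a member currently has, so the listing itself may not change; but the underlying goal - arrays that assemble correctly no matter how sda/sdb get shuffled - is normally achieved by identifying each array by UUID and restricting the DEVICE line to the udev-created names. A hedged mdadm.conf sketch, with the UUID left as a placeholder:
Code:
DEVICE /dev/hd*_boot*
ARRAY /dev/md0 UUID=<array-uuid>   # from mdadm --detail --scan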
View 2 Replies
View Related
Jan 2, 2011
I created a raid1 disk with Disk Utility on Karmic, then upgraded my system to Lucid and then Maverick. This RAID disk is just a data store; I'm not booting off of it. When I reboot, the raid1 disk does not start. I have to go into Disk Utility, stop the array and then start it. Then it comes up and I can mount it. I ran dpkg-reconfigure mdadm and it created a valid entry in mdadm.conf, but the array still does not start on boot. I want to have it auto-mount with fstab but need to make sure the array starts first.
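One hedged thing to check: on Ubuntu the mdadm.conf that matters at boot is the copy inside the initramfs, so after dpkg-reconfigure the initramfs usually needs regenerating; once the array assembles at boot, an fstab line keyed by UUID handles the mount (the UUID and mount point below are placeholders):
Code:
sudo update-initramfs -u
# /etc/fstab
# UUID=<filesystem-uuid-from-blkid>  /mnt/store  ext4  defaults  0  2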
View 12 Replies
View Related