OpenSUSE Install :: Mdadm - Change Spare To Active In RAID1 Array?

Aug 7, 2011

I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.

OpenSuse 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x 500GB IDE disks: /dev/md0 as a 250MB partition across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB partition (/dev/md1) across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that left drives failing to show up at boot. Somehow, /dev/sdd2 ended up in array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.

The obvious step of failing the partition, removing it, then adding (or re-adding) it just brings it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. I thought this might be the trick - after the zero, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.

So 'mdadm --add /dev/md1 /dev/sdd2' and it still comes back as a spare. Here is mdadm -D /dev/md1

/dev/md1:
Version : 1.0
Creation Time : Sat Jul 9 10:26:01 2011
Raid Level : raid1
Array Size : 488119160 (465.51 GiB 499.83 GB)
code....

I can't stop this array because the OS is running from it, and I can't easily boot from CD to repair things since all IDE ports have disks attached.

Does anyone have an incantation to promote a spare to active?
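A common cause of this symptom (not confirmed by the truncated output above) is that the array's active slot count got reduced to 1 during the cable mishaps, so anything added can only ever become a spare. A minimal sketch, assuming that is the case:

Code:
# How many active slots does md1 think it has?
mdadm -D /dev/md1 | grep 'Raid Devices'
# If it reports 1, grow the slot count back to 2; the spare should then
# be promoted and start rebuilding as an active member
mdadm --grow /dev/md1 --raid-devices=2

As an aside, 'mdadm --zero-superblock /dev/sdd2' (run while the partition is out of the array) is a cleaner way than dd to wipe stale md metadata before re-adding.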

View 2 Replies


General :: Mdadm - RAID5 To RAID6, Spare Won't Become Active

Jun 21, 2011

I've been playing with this for hours, and have been unable to figure it out. I tried to convert my RAID5 array of 4 active disks and 1 spare to a RAID6 with 5 active disks.

I did this:

Code:
mdadm --grow /dev/md4 --raid-devices 5 --level 6
Here is what I have on /dev/md4:

Code:
/dev/sde1 active
/dev/sdg1 active
/dev/sdj1 active
/dev/sdf1 active
removed
/dev/sdh5 spare
code....

but it tells me that /dev/sde is busy, and then that it has a bad superblock (From what I've read, I'm sure the bad superblock is just because of the "busy" message). I've tried this with the -f option, too, with no luck.
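For what it's worth, a reshape from RAID5 to RAID6 with the same number of data disks generally needs a backup file for the critical section. A hedged sketch - the backup path is only an example and must live on a filesystem outside the array:

Code:
mdadm --grow /dev/md4 --level=6 --raid-devices=5 \
      --backup-file=/root/md4-grow.backup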

View 7 Replies View Related

General :: Recover A RAID1 Array Using Mdadm

May 12, 2010

I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died, and I could do with either rebuilding the array or getting the data off another way.

I have a spare machine I could use for the recovery process. It has a spare drive but it's only 120GB; I also have a bigger 320GB disk, but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB, or will it have to be 500GB? Next question is what the best version of Linux is to use; I have Knoppix 6.2 and Ubuntu (not sure on the version) already. I noticed that mdadm isn't installed by default on Ubuntu.
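Since either member of a RAID1 holds a complete copy, a replacement only needs to be at least as large as the original 500GB members (bigger is fine, the extra space is simply unused). Just to read the data off, the single surviving disk can often be started degraded on the spare machine. A sketch, assuming the disk shows up there as /dev/sdb with the data on sdb1 and a filesystem Linux can mount (some NAS firmwares use non-standard layouts, so this may not apply):

Code:
# Start a degraded RAID1 from the one surviving member
sudo mdadm --assemble --run /dev/md0 /dev/sdb1
# Mount read-only and copy the data somewhere safe
sudo mount -o ro /dev/md0 /mnt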

View 1 Replies View Related

Hardware :: Raid1 Mdadm Repair Degraded Array With Used Good Hard Drive?

Jun 27, 2009

I have a used-but-good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID1 array. mdadm --detail /dev/md0 shows:

Code:
0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1

I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue:

Code:
mdadm --manage /dev/md0 --fail /dev/sda1

But mdadm's response is: "mdadm: hot remove failed for /dev/sda1: no such device or address". I thought I must mark the failed drive as "failed" to prevent RAID1 from mirroring in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the RAID array from automatically trying to mirror in the wrong direction.
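For what it's worth, the "removed" slot shown above is already empty, so there is nothing left to mark as failed - which is why --fail reports "no such device". A sketch of the usual replacement sequence, assuming the survivor is sdb and the used-but-good disk is sda (names as in the post):

Code:
# Copy the partition layout from the surviving disk to the replacement
# (assumes MBR partition tables)
sfdisk -d /dev/sdb | sfdisk /dev/sda
# Wipe any stale md metadata left on the replacement partition
mdadm --zero-superblock /dev/sda1
# Add it; the rebuild always copies from the active member (sdb1) onto
# the newcomer, so it cannot mirror in the wrong direction
mdadm --manage /dev/md0 --add /dev/sda1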

View 7 Replies View Related

Ubuntu Installation :: Mdadm Using "sudo Apt-get Install Mdadm" - Error "Generating Array Device Nodes"

Jan 11, 2010

I am planning on setting up a 4x1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm"; all worked fine except for the following error:

Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found
failed.

The end result is that the /dev/md0 device has not been created, as can be seen here:


Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

After googling, I found the following bug which describes the issue: [URL] However, it was reported way back in April 2009, and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug to get me up and running?
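One workaround often suggested for this MAKEDEV bug is simply to create the missing device node by hand and carry on. A sketch - the member devices listed are placeholders for the four 1TB drives:

Code:
# md0 is block major 9, minor 0
sudo mknod /dev/md0 b 9 0
sudo chown root:disk /dev/md0
# Then create the array as planned, e.g.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1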

View 4 Replies View Related

Ubuntu :: Mdadm Array Gone After Failed Install?

Mar 30, 2010

I tried to install Ubuntu 10.04 using the beta alternate install CD.

Everything went fine until the partitioning section.

I chose manual partitioning and all my existing partitions were detected correctly, including my 2 mdadm raid0 arrays.

I chose md0 as my / partition and chose to format the partition.

I chose md1 as my /home partition and chose to keep the data.

When I chose to continue and write the changes to disk, the install started to create an ext4 partition on md0; the installer then stopped with an error that the kernel could not reread the partition table.

I aborted the installation at this point.

Now I can not access either of my arrays.

I have booted a livecd and installed mdadm. When I checked /etc/mdadm/mdadm.conf my existing arrays were already listed.

Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

[Code].....
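Since the live CD's mdadm.conf already lists the arrays, the first thing worth trying (hedged, since the config excerpt above is cut off) is a plain scan-assemble, then inspecting the members if it refuses:

Code:
sudo mdadm --assemble --scan
cat /proc/mdstat
# If assembly fails, look at each member's superblock (replace sdXn
# with the actual raid member partitions)
sudo mdadm --examine /dev/sdXn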

View 3 Replies View Related

Ubuntu Installation :: Hard To Install The RAID1 Array In Brand New PC?

Nov 19, 2010

Like it says in the title, I don't think it should be this hard to install a RAID1 array in my brand new PC. Here is what is happening. I have two brand new 1TB drives that I am attempting a new, fresh install of 10.10 on (in fact, the entire box is new). I am attempting to use the alternate desktop install so that I can have access to manual partitioning (which is required to set up RAID1, correct?).

I tried to use the guide here: [URL]... I followed the steps, but when I got to the very end (after selecting and creating the MDs) I get an error message stating that there is no root file system defined. I went back and checked all the steps and I am sure I followed everything in the guide.

Here are some quirks (not sure if they are bugs or not). In step 5 of the disk partitioning, it says to select the bootable flag and set it to yes (I am assuming). I press enter over that option, the screen flashes really quickly to a progress bar, but then comes back to the options screen and it still says the bootable flag is off. No matter how many times I do it, it says "off".

Also, and here is the bigger problem I think - the guide says to select the free space on each drive and then select "Automatically partition the free space", which I do, and it comes back looking formatted accordingly - it has a 975.6 GB ext4 / and a 24.6 GB swap. No problem there.

BUT - whenever I do the same thing to the second drive, the partitions on the first seem to disappear. Meaning, it doesn't say free space, and has two partitions listed, but the / and the swap (the last items in each row) have moved to the second drive's partitions. I am not sure if this is how it is supposed to be, since the pictures in the linked guide do not show what it looks like after that. This is driving me crazy; I have to have it set up in RAID1 and I'm unsure what I am missing.

View 1 Replies View Related

Fedora :: MDADM On 12 64bit - Error "mdadm: Cannot Add Disks To A 'member' Array, Perform This Operation On The Parent Container"

Nov 22, 2009

Here's a brief description of my system:

120GB Sata HDD - Primary OS drive
3 x 1.0TB Sata HDD - Raid 5 array

This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB, which I had problems with - Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disk under the Live CD and proceeded to install F12.

Once F12 was installed, I loaded up mdadm to re-activate my RAID5 array, using 'sudo mdadm --assemble --uuid=<the uuid>' - and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. I erased /dev/sdb, and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb' and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.

[Code].....

I don't believe the hard drives are connected in the exact same order they were in before - I disconnected everything in the system and blew it out (it was pretty dusty)
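The 'member array' error usually means the RAID5 was assembled as part of an external-metadata container (e.g. Intel IMSM or DDF BIOS-RAID metadata on the disks). In that layout, disks are added to the parent container device rather than to the member array. A hedged sketch; the container name is an assumption (it is often /dev/md127 or /dev/md/imsm0):

Code:
cat /proc/mdstat                 # the container shows up as its own md device
mdadm --detail /dev/md0          # look for a "Container :" line naming the parent
# Add the wiped disk to the container, not to the member array
mdadm --add /dev/md127 /dev/sdb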

View 1 Replies View Related

Ubuntu Servers :: RAID5 - Re-adding Drive As Active, Not Spare?

Jan 13, 2011

I have a nice little Ubuntu server with 6 x 1TB drives assembled into a RAID5 array. Recently the SATA cable of one of the drives failed, so I ordered a new cable and ran the server in degraded mode for a few days. Like this:

Code:
/dev/md0:
Version : 00.90
Creation Time : Sat Sep 19 10:39:11 2009
Raid Level : raid5
Array Size : 4883812480 (4657.57 GiB 5001.02 GB)
code....

I'd like the 6th drive to be active, not spare, like before. Should I just wait for the rebuild to finish (it can easily take over a day)? Or should I add it somehow differently so it is active immediately?

I'm not sure, but I think when I simulated failures by unplugging one of the disks, the "failed" drive became active again after plugging it back in, and rebuilding started as well, of course. But that was 2 years ago, so...

The array works just fine for now - I can access files, etc. But I suspect that in this state, if another cable or drive fails, it won't survive. And even after rebuilding is finished, the 6th drive will still be marked as "spare" - right?
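For what it's worth, a freshly added disk is always listed as a spare while it rebuilds; once the recovery shown in /proc/mdstat completes, it should flip to active sync on its own. A small sketch for keeping an eye on it (md0 as in the output above):

Code:
watch -n 60 cat /proc/mdstat
mdadm --detail /dev/md0   # "Rebuild Status" and the device table show the progress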

View 4 Replies View Related

Server :: Mdadm Cannot Grow Raid1 Over Lvm?

Mar 31, 2011

I have 2 servers (xen1 and xen2 are their hostnames) with the somewhat perverse configuration below. Each server has 4 SATA disks, 1 TB each.

16 Gb ddr3
debian squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

Storage configuration: the first 256 MB and 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. There is LVM2 installed over that RAID10; the volume group is named xenlvm (the servers are intended to be Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):

[Code]...
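The grow command that actually fails was cut off above, so this is only a general sketch of growing an md RAID1 that backs an LVM physical volume (the device name /dev/md2 is an assumption):

Code:
# Grow the md device itself (once the underlying partitions are larger)
mdadm --grow /dev/md2 --size=max
# Let LVM see the extra space on the PV
pvresize /dev/md2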

View 3 Replies View Related

Server :: RAID 6 Array Coming Up With All Disks As Spare

Mar 25, 2011

I have been running a server with an increasingly large md array and have always been plagued with intermittent disk faults. For a long time, I've attributed those to either temperature or power glitches. I had just embarked on a quest to lower the case and drive temperatures. They were running between 43 and 47C, sometimes peaking at 52C, so I've added more case fan power and made sure the drive cage was in the airflow (it has its own fan, too). Also, I've upgraded my power supply and made very sure that all the connectors are good. The array currently is a RAID6 with 5 Seagate 1.5TB drives.

When everything seemed to be working fine, I looked at my SMART logs and found that two of my drives (both well over 14000 operating hours) were showing uncorrectable bad blocks. Since it's RAID6, I figured I couldn't do much harm, so I ran a badblocks test on one, zeroed the blocks that were reported bad (figuring the drive's defect management would remap them to a good part of the disk), zeroed the superblock, added it back to the pack, and the resync started. At around 50%, a second drive decided to go, and shortly thereafter a third. Now, with only two of five drives left, the RAID6 has failed. Fine. At least no data will be written to it anymore; however, now I cannot reassemble the array anymore.

Whenever I try I get this:
Code:
mdadm --assemble --scan
mdadm: /dev/md1 assembled from 2 drives and 2 spares - not enough to start the array

Code:
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [linear]
md1 : inactive sdf1[4](S) sde1[6](S) sdg1[1](S) sdh1[5](S) sdd1[2](S)
7325679320 blocks super 1.0
md0 : active raid1 sdb2[0] sdc2[1]
312464128 blocks [2/2] [UU]
bitmap: 3/149 pages [12KB], 1024KB chunk

Which is not fine. I'm sure that three devices are fine (normally, a failed device would just rejoin the array, skipping most of the resync by way of the bitmap) so I should be able to reassemble the array with the two good ones and the one that failed last, then add the one that failed during the resync and finally re-add the original offender. However, I have no idea how to get them out of the "(S)" state.

Code:
mdadm --examine /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d79d81cc:fff69625:5fb4ab4c:46d45217 .....
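A hedged sketch of the usual way out of the all-spares state: compare the event counters of the members, then force-assemble from the ones whose counters are close together. The member list below is only an example - it should be the three good drives plus the one that failed last, leaving out the first offender:

Code:
mdadm --examine /dev/sd[defgh]1 | grep -E '/dev|Events|Role'
# Stop the half-assembled, inactive array first
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sdd1 /dev/sde1 /dev/sdg1 /dev/sdh1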

View 2 Replies View Related

Ubuntu Servers :: Boot From Raid1 (mdadm) + Lvm

Aug 11, 2010

Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.

[Code]....

Then I tested my RAID by hot-pulling the sda cable (ouch). It worked fine, the system still ran, and it also managed to reboot from the remaining sdb (which of course showed up as sda, lacking the first drive). Now I am trying to recover this pre-crash state. Adding the first disk (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk being sda...

At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen which would let me wait for a boot for weeks... So, my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again has been: zero sda's first and last gigabyte to kill any IDs; duplicate sdb's first cylinder to sda to make it bootable; reinitialize sdb's partition table using command 'o' in fdisk for a new disk ID; recreate the sda1 partition; add sda1 to md0.
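A hedged note on the boot side: with GRUB 2 on 10.04, the boot loader has to be installed to the MBR of both mirror members, and the initramfs must know about the current md/LVM layout, otherwise only one disk remains bootable. A sketch, run from the working (booted) system after the resync finishes:

Code:
grub-install /dev/sda
grub-install /dev/sdb
update-grub
update-initramfs -u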

View 1 Replies View Related

General :: RAID1 Resync Slow Using Mdadm?

Mar 11, 2010

I am running kernel 2.6.18-128.el5 on a 64-bit quad-core machine with 8GB RAM. Using mdadm, I set up a RAID1 array between two Western Digital 1.5TB drives. The problem is that the resync is running VERY slowly. Here is the current status.

[root@royalflush shared]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdc5[1] hda5[0]
1304493952 blocks [2/2] [UU]
[=>] resync = 6.2% (81592192/1304493952) finish=4280156.0min speed=4K/sec
unused devices: <none>
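Two hedged things worth checking for a resync this slow: the md speed throttles, and (since these show up as hda/hdc IDE devices) whether DMA is actually enabled on the drives:

Code:
# Current md resync throttles, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# Raise the floor so the resync is not throttled to a crawl
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# A resync this slow on IDE often means DMA is off
hdparm -d /dev/hda /dev/hdc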

View 1 Replies View Related

Software :: Moved To Mdadm Raid1 And Now Having Grub?

Nov 27, 2010

I posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to an mdadm RAID1 array. I partially followed this guide: [URL].. What I did was boot up a live CD and make three partitions on both of my empty disks: one for /, one for swap and one for /vz (it's an OpenVZ server). I made those partitions into separate RAID1 arrays and then rsync-ed everything from the old disk to the new partitions.

After I had moved everything I did chroot into the new raid array and edited both grub config files and fstab, according to the guide.

[Code]...

I have managed to run the system on the RAID1 disks when using a Super Grub2 Disk off a CD, but that has its own grub and can boot any distro, so I can see that the system is working fine, except for grub. I have tried installing grub both from a live CD (Ubuntu 64-bit) and when booted into the RAID1 array, but it gives the same results as stated above.

View 2 Replies View Related

General :: Creating A RAID1 Partition With Mdadm On Ubuntu?

Jan 28, 2010

I'm trying to set up a RAID1 partition on my Ubuntu 9.10 workstation. On this dual-boot system, Ubuntu is running from a separate drive (/dev/sdc - an SSD that is quite small, which is why I need more disk space). Besides that, there are two traditional 500 GB hard drives, which have Windows 7 installed (I want to keep the Windows installation intact), and about half of the space unallocated. This space is where I want to set up a single, large RAID1 partition for Linux.

(This, to my understanding, would be software RAID, whereas the Windows partitions are on hardware RAID - I hope this isn't a problem... Edit: See Peter's comment. I guess this shouldn't be a problem since I see both drives separately on Linux.) On both disks, /dev/sda and /dev/sdb, I created, using fdisk, identical new partitions of type "Linux raid autodetect" to fill up the unallocated space.

Device Boot Start End Blocks Id System
/dev/sda1 1 10 80293+ de Dell Utility
/dev/sda2 * 11 106 768000 7 HPFS/NTFS

[code]....

But so is "Device or resource busy" when trying to create the RAID array. Quite strange.

Update: Could the device mapper have something to do with this? How do /dev/mapper and dmraid relate to all this mdadm stuff anyway? Both provide software RAID, but.. differently? Sorry for my ignorance here. Under /dev/mapper/ there are some device files that, I think, somehow match the 3 Windows RAID partitions (sd{a,b}1 through sd{a,b}3). I don't know why there are four of these arrays though.

$ ls /dev/mapper/
control isw_dgjjcdcegc_ARRAY1 isw_dgjjcdcegc_ARRAY3
isw_dgjjcdcegc_ARRAY isw_dgjjcdcegc_ARRAY2
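The isw_* entries under /dev/mapper mean dmraid has activated the Intel/BIOS fakeraid set that Windows lives on, and device-mapper holding the disks is the usual reason mdadm reports "Device or resource busy". A hedged sketch for inspecting (and, for the current boot only, releasing) those mappings - do not erase the fakeraid metadata itself, since Windows depends on it:

Code:
sudo dmraid -r        # which disks carry fakeraid metadata
sudo dmsetup ls       # which mappings currently hold sda/sdb busy
# Removing a mapping frees the underlying partitions until the next boot
sudo dmsetup remove isw_dgjjcdcegc_ARRAY3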

View 2 Replies View Related

Ubuntu Servers :: Interpreting Mdadm RAID1 Status?

Feb 7, 2011

I have a RAID1 array, where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection from the mdadm -E /dev/sdb1 command says that sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive is failed? Am I misreading the command outputs?

Code:
sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

[Code]...
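A hedged way to reconcile the two views: 'mdadm -E /dev/sdb1' describes the array as recorded in sdb1's own superblock, so a slot shown as "removed" there is not necessarily sdb1 itself. Comparing the Events counters and State lines of both members against the running array usually settles which disk actually dropped out:

Code:
sudo mdadm --detail /dev/md0      # the running array's view
sudo mdadm --examine /dev/sda1    # compare "Events" and "State"
sudo mdadm --examine /dev/sdb1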

View 3 Replies View Related

Ubuntu Servers :: Mdadm Raid1 Doesn't Finish Syncing?

Nov 14, 2010

One of the hard drives in my server failed the other day; backups saved the day and downtime was only a few hours. But when setting up the new drive I went ahead and migrated to software RAID, in the hope that it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.

sda was the original drive, with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I added the sda partitions to the array, and the sync went off without a hitch on md1 and md2; md4 progressed well, but after a few hours /proc/mdstat just shows this:

Code:
root@d668:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sda2[0]
9767424 blocks [2/2] [UU]

[Code]...

View 3 Replies View Related

Server :: Mdadm Software Raid1 Failed Disk Detection Too Long

Jul 22, 2011

I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as RAID1 with a hot spare. If I pull one of the active disks, all file I/O stops for about 2.5 minutes, after which it starts again and the RAID array is rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:

12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)

[code]....
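A hedged note: the 2.5-minute stall is typically the drive's own internal error recovery plus the SATA link/driver timeouts, which the /sys/block settings don't cover. On drives that support SCT ERC (TLER), capping the internal recovery time can shorten the stall - though for a physically pulled disk the delay is mostly the controller's link timeout, so this may not help in that exact test:

Code:
# Cap internal error recovery at 7 seconds (values are tenths of a second)
smartctl -l scterc,70,70 /dev/sda
smartctl -l scterc /dev/sda      # verify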

View 1 Replies View Related

CentOS 5 :: Ext3 On Mdadm/raid1 Freeze/lock-ups On Heavy Write?

Feb 27, 2011

I'm facing a problem where the server freezes on heavy writes.

System

CentOS 5.5 x64_86 with latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2Gb DDR3.
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0

[Code].....

View 19 Replies View Related

Fedora :: Can't Mount RAID1 Array?

Dec 29, 2010

I have just upgraded to Fedora 14 from an older version. I now have problems mounting my RAID1 array, which was operating correctly until now. This is a software RAID which was initially built under Fedora 10. The array is md0, and is made of 2 SATA drives (sdc and sdd) which have only one partition each. The underlying filesystem is NTFS. The array is assembled correctly and active, as reported by /proc/mdstat and mdadm -D. When I try to mount the array, I get this:

Code:
[root@Goofy ~]# mount /dev/md0 /mnt/raid
mount: you must specify the filesystem type

[code].....
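A hedged first step, since mount is refusing to guess the type: name the filesystem explicitly (Fedora needs the ntfs-3g package for NTFS support) and double-check what the array really contains:

Code:
blkid /dev/md0
mount -t ntfs-3g /dev/md0 /mnt/raid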

View 4 Replies View Related

Server :: Debian RAID 10 Spare Drives Versus Active Drives

Jun 9, 2011

So I set up a RAID10 system and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?

root@wolfden:~# cat /proc/mdstat
Personalities : [raid0] [raid10]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]

[code]....

View 2 Replies View Related

Ubuntu :: RAID1 Array Fails On Reboot?

Sep 8, 2010

I did a search and couldn't find anything pertaining to this - if I've missed something, please point me in the right direction. We have an Ubuntu box set up as a headless office server (latest desktop install of Ubuntu), and we recently set up two 1TB HDDs in a RAID1 array using mdadm - as far as I can tell it worked successfully and created /dev/md0 with an ext3 file system. After sharing the drive I can see it from the other office computers and could transfer data to and from the RAID array just fine.

I didn't figure out how to get it to automatically mount on boot, so I restarted to see if it would do so by default - however, when I restarted I couldn't see the RAID array any longer on the desktop, and it came up as a 0.0kB RAID array in Disk Utility, saying it was broken. It wouldn't let me check it until I stopped and restarted the array.

After restarting I hit "check array" and it appears to be repairing the drives. What have I missed? What happened here? How can I fix it? What other info can I provide to assist:

sudo blkid shows:

/dev/sda1: UUID: "<snip>" TYPE="ext3" (system HDD)
/dev/sda5: UUID: "<snip>" TYPE="swap"
/dev/sdc1: UUID: "<snip>" TYPE="linux_raid_member"
/dev/sdb1: UUID: "<snip>" TYPE="linux_raid_member"
/dev/md0: UUID: "<snip>" SEC_TYPE="ext2" TYPE="ext3"

disk utility: RAID Array: Mirror (RAID-1), Metadata version 0.90.0, Partitioning: Not Partitioned, Components: 2...

View 1 Replies View Related

Hardware :: Intermittently Clicking HDD On A RAID1 Md Array

Mar 13, 2011

One of the HDDs in my 2-drive RAID1 md (Linux software RAID) array is intermittently clicking. When this occurs, I can hear a loud clicking noise at uniform intervals and then it stops. It's like "click..click...click..click...click..click...click..click". You know what I mean.

It does it about 4 times per hour. I believe this drive is about to die.

Until I find a replacement drive, can I run into problems with the data on the array? I believe the mdadm utility would tell me if the drive was faulty, and once I replace the drive it would auto-rebuild the array (re-copy the data to the new drive)?

I have over 1.2TB of data on this array and I really don't want to lose everything...

View 5 Replies View Related

General :: Create RAID1 Array With Given/fix HD Labels?

Aug 19, 2010

I have two HDs (let's say sda and sdb). Both are the same size and already have the same partitions (sda1/sda2/sda3 and sdb1/sdb2/sdb3). Basically they are ready to be made into a RAID1 array.

By writing new udev rules, I could create fixed device names (labels) based on /sbin/scsi_id.

Example: for sdb1 I have a fixed device name created under /dev as hd2_boot1, for sdb2 I have /dev/hd2_boot2, and finally for sdb3 I have created the device /dev/hd2_boot3.

Using the command "mdadm --create /dev/md0 --level=1 ....", I could create a RAID array.

But when I check the status of one of the RAID devices, e.g. with the command "mdadm --detail /dev/md2", it still shows the sdb* devices as members of the RAID array, not the hd2_boot* devices. Something like this:

I would basically like to always see /dev/hd2_boot3 as the member of the RAID array, not /dev/sdb3 (as above) - is this possible?

Bottom line, I would like the membership of the RAID arrays to be tied to their SCSI IDs, not to the device names assigned by the kernel, since those names (sda, sdb, sdc, etc.) can change depending on the physical connection.

View 2 Replies View Related

Ubuntu :: Raid1 Array Won't Start On Boot

Jan 2, 2011

I created a RAID1 disk with Disk Utility on Karmic, then upgraded my system to Lucid and then Maverick. This RAID disk is just a data store; I'm not booting off of it. When I reboot, the RAID1 disk does not start. I have to go into Disk Utility, stop the array and then start it. Then it comes up and I can mount it. I ran dpkg-reconfigure mdadm and it created a valid entry in mdadm.conf, but the array still does not start on boot. I want to have it auto-mount with fstab, but I need to make sure the array starts first.

View 12 Replies View Related

Server :: MDADM RAID6 Active Despite 3 Drive Failures

Jul 26, 2011

I am currently having problems with my RAID partition. First, two disks were having trouble (sde, sdf). Through smartctl I noticed there were some bad blocks, so first I set them to failed and re-added them so that the RAID array would overwrite those blocks. Since that didn't work, I went ahead and replaced the disks. The recovery process was slow and I left things running overnight. This morning I found out that another disk (sdb) has failed. Strangely enough, the array has not become inactive.

md3 : active raid6 sdf1[15](S) sde1[16](S) sdak1[10] sdj1[8] sdk1[9] sdb1[17](F) sdan1[13] sdd1[2] sdc1[1] sdg1[5] sdi1[7] sdal1[11] sdam1[12] sdao1[14] sdh1[6]
25395655168 blocks level 6, 64k chunk, algorithm 2 [15/12] [_UU__UUUUUUUUUU]

Does anyone have any recommendations as to the steps to take with regard to recovering/fixing the problem? The disk is basically full, so I haven't written anything to it in the interim.

View 2 Replies View Related

Ubuntu Installation :: Booting When RAID1 Array Is Present?

Feb 24, 2011

I'm sorry if this is the wrong section and if there is another thread on the matter. I searched but couldn't find threads with my specific problem. I've just installed Ubuntu 10.10 Server 64-bit, which I intend to use as an internal file server.

The hdd setup is:
500gb system disk
1tb storage
2tb storage (2 x 2tb using the built-in motherboard hardware RAID1)

When the installation was complete and the computer rebooted, I got an error message saying "error: no such disk". After re-installation I got the same message, and I then tried disconnecting all the storage devices and it booted perfectly. I then tried connecting up the 1tb drive and again it booted as it should. But when I re-connected the RAIDed disks, the error message re-appeared.

View 5 Replies View Related

CentOS 5 :: Permanently Remove Drive From Md Array (RAID1)

May 14, 2011

I installed a distro based on CentOS 5.5 (the FreePBX distro, FYI). It used an automated kickstart script to create md RAID1 arrays from all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the arrays. So, I ended up with three md arrays (boot, swap, data) that included the thumb drive. Even better, it used the thumb drive for grub boot, so I couldn't start up without it. I was able to mark the USB drive as 'failed' and remove it from each array, and even change grub around to boot without the USB drive, but now each of the arrays is marked as degraded:

[Code]...
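Assuming the USB member has already been failed and removed from each array (as described above), the remaining step is usually to shrink each array's slot count back to the two real disks so the degraded flag clears. A sketch - the md names are taken from the typical boot/swap/data split and are assumptions:

Code:
mdadm --grow /dev/md0 --raid-devices=2
mdadm --grow /dev/md1 --raid-devices=2
mdadm --grow /dev/md2 --raid-devices=2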

View 1 Replies View Related

Ubuntu :: Recreating Mdadm Array In 10.04 After Upgrade From 9.04?

Jul 31, 2010

I had a RAID array working great in 9.04 with mdadm, and I just recently upgraded to 10.04 (clean install). I'm trying to reassemble the array and having a dickens of a time. When I try to recreate the array with:

Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdd /dev/sdc
I get this:

[code]....
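A hedged caution: --create rewrites the superblocks and only reproduces the old array if every parameter (metadata version, chunk size, device order) matches exactly. For an array that already exists, --assemble is normally the right verb:

Code:
sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdd /dev/sdc
# or let mdadm find it from the superblocks on its own
sudo mdadm --assemble --scan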

View 9 Replies View Related

Ubuntu :: How To Auto Start Array In Mdadm

Apr 10, 2011

I have created a RAID5 array using the built-in Disk Utility. This worked great; I formatted it with ext4 and mounted it. However, on reboot, Disk Utility shows the RAID array as not running, with the state "Not running, partially assembled". I have to stop the array, then restart it, then mount it before I can access what is on it. Obviously this is not very good, as I often have the system shut down at night to conserve energy, and having to do this every time it boots is a pain. Could someone please explain in plain English what I need to do to get my array to start and mount on startup?
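A sketch of the usual way to make the array assemble and mount by itself at boot (the mount point and filesystem are examples):

Code:
# Record the array so it is assembled automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# Then mount it from /etc/fstab by UUID (get the UUID with: sudo blkid /dev/md0)
# UUID=<uuid-of-md0>  /srv/raid  ext4  defaults  0  2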

View 4 Replies View Related






