Ubuntu Servers :: RAID-5 Recovery (Spare/Active) / Degraded And Can't Create RAID, Auto-Stop RAID [md1]?
Feb 1, 2011
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin what may be the small chance I have left to rescue my data, I would like to hear the input of this wise community first.
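For reference, the usual first and least destructive move in this situation is a forced assemble rather than a re-create; --create rewrites the superblocks even with --assume-clean and is very much a last resort. A hedged sketch, using the device names from the examine output above:
Code:
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2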
View 4 Replies
Jun 11, 2010
So my server has 7 HDs in RAID 5, and all was working well until one of them died. The HD that died sort of works: it can read about half of a file, but it also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: "The drive for /media_kbt is not ready or present. Press S to skip or M for manual recovery." I hit S and then go to Disk Utility, but I can't start or add disks to the array.
Here is me trying to do random stuff
Code:
administrator@3dslice-host:~$ sudo mdadm --stop /dev/md0
[sudo] password for administrator:
mdadm: metadata format 00.90 unknown, ignored.
mdadm: stopped /dev/md0
administrator@3dslice-host:~$ sudo mdadm --add /dev/md0 /dev/sda1
mdadm: metadata format 00.90 unknown, ignored.
[Code]...
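Two hedged observations on the output above: the "metadata format 00.90 unknown, ignored" warnings usually trace back to a metadata=00.90 entry in /etc/mdadm/mdadm.conf (newer mdadm wants it spelled 0.90), and a 7-disk RAID 5 with one dead member can normally be started degraded with --run. A sketch, with the surviving member names assumed:
Code:
sudo sed -i 's/metadata=00.90/metadata=0.90/' /etc/mdadm/mdadm.conf
sudo mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1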
View 2 Replies
Jun 26, 2011
Ubuntu Server 11.04 i386. I've used Linux on and off for years, but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS but decided to give Ubuntu+Webmin a try, and up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array: I shut down, remove a drive, then boot up. The array won't start at all, and comes up with this error during boot:
Quote:
the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery
If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a", I get the response:
Quote:
mount: special device /dev/RAIDVG1/RAIDLV1 does not exist
I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine, and is in a clean state.
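Two hedged things worth checking here: settings under /etc/initramfs-tools/conf.d/ only take effect once the initramfs is rebuilt, and since the missing device is an LVM volume (/dev/RAIDVG1/RAIDLV1), the volume group also has to be activated once the degraded array is running. A sketch:
Code:
sudo update-initramfs -u    # bake BOOT_DEGRADED=true into the initramfs
# from the recovery shell, once the md device is up:
vgchange -ay RAIDVG1        # activate the volume group on the degraded array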
View 9 Replies
Jun 9, 2011
So I set up a RAID ten system, and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
root@wolfden:~# cat /proc/mdstat
Personalities : [raid0] [raid10]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
[code]....
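In a 4-drive RAID 10, all four devices are active: two mirrored pairs striped together, which matches the mdstat line above. A spare would be an additional idle disk that mdadm only pulls in to rebuild onto when an active one fails. One way to see each disk's role (a hedged sketch):
Code:
sudo mdadm --detail /dev/md1   # the device table at the bottom labels each disk active, spare or faulty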
View 2 Replies
Jul 18, 2009
How can I create RAID 1+0 using two drives (one with data on it, and a second one that is new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
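With only two drives this ends up as a plain RAID 1 mirror rather than true 1+0, but the usual trick for keeping the existing data is to build a degraded mirror on the new drive, copy everything across, and only then add the old drive. A hedged sketch, with partition names assumed:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # new, empty drive
mkfs.ext4 /dev/md0               # then copy the data from the old drive onto md0
mdadm --add /dev/md0 /dev/sda1   # old drive joins and is overwritten from the array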
View 3 Replies
Sep 1, 2011
I recently upgraded from 10.04 to 11.04 and I now often get boot messages about a degraded raid.
I'm fairly experienced, but I'm confused about which RAID it is talking about. I have a RAID5 array, but I don't boot off that, and it seems fine when I finally get the machine to boot. Previously I didn't have any other RAID arrays[1], but now I seem to have two others, called md126 and md127, and they both seem to be degraded. Where did they come from?
[1] I *do* have two 80GB drives that I was booting from in RAID1, but that was a looong time ago, and I have since only booted from one of them. The partition table indeed shows partitions 1 and 5 are raid autodetect, and /proc/mdstat shows they are degraded ([U_]). Could it be that this is causing the problem? If so, why has this only started happening since the upgrade from 10.04 to 11.04? Anyway, perhaps it is a good idea to add that second disk back into the RAID1 array. If so, how do I do that? Note that I've also noticed that when I boot and get to the screen where I select from the different kernel versions, I now get a couple of really old ones too - my thought is that these are from the RAID1 disk that I stopped using. If I add it to the array, how can I be sure it will mirror in the correct direction?
It could be that I have fairly recently plugged in that second RAID1 disk, after a long time of not having enough spare SATA sockets (I switched my RAID5 array from 8 disks to only 3 disks, so I suddenly had a lot more spare sockets).
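On the mirroring direction: when a disk is added to a running degraded array, mdadm always syncs from the active member onto the newcomer, so the disk you currently boot from is the source. A hedged sketch, with the md number and partition assumed from your mdstat output:
Code:
cat /proc/mdstat                       # confirm which md device is [U_] and which disk is its active member
sudo mdadm --add /dev/md126 /dev/sdb1  # sdb1 is the long-unused disk; it gets overwritten from the active member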
View 9 Replies
Jun 20, 2010
Basically, I installed Debian Lenny, creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/".
I did not pay much attention, but it seemed to work fine at the start - both RAID devices were up early during boot, I think. After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling & installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.
Let me describe my HDD setup: when I do "sudo fdisk -l" it gives me sda1, sda2 raid partitions on sda and sdb1, sdb2 raid partitions on sdb - these are my two 1 TB drives - plus sdc1, sdc2, sdc5 on my 3rd, 160GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS). The problem is that the RAID starts degraded every time (starts with 1 out of 2 devices). When doing "cat /proc/mdstat" I get "U_" statuses, and the 2nd device is "removed" on both md devices.
I can successfully run partx -a sdb, which gives me sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sdb1". After I re-add the devices it syncs the disks, and after about 3 hours I see a fine status in mdstat. However, when I reboot, it again starts with a degraded array. I get the feeling that after I re-add the disks and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.
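One hedged suspicion: the copy of mdadm.conf that matters at boot is the one inside the initramfs, and the fact that partx has to be run before sdb's partitions appear suggests the initramfs isn't seeing them early either - both are refreshed by rebuilding it:
Code:
sudo update-initramfs -u -k all   # refresh mdadm.conf and early device detection for every installed kernel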
View 1 Replies
Jun 20, 2011
I have Ubuntu 10.10 (Maverick) and I'm using mdadm 3.1.4. I have created a RAID 10 using imsm as the metadata, and this runs successfully if I assemble the RAID from the command line. My question is: how do I get the RAID to auto-assemble when I boot the system? I would like to be able to mount the RAID when the system starts up (but the OS is on a separate disk altogether). What I've found out so far is that I must do a Code: update-initramfs -k all -u AFTER I've modified the /etc/mdadm/mdadm.conf file. This is what I've got in my configuration file:
[Code]....
BUT what happens is that when I reboot my server and then do "ls -l /dev/md*", there is no listing at all for any RAID devices. I actually have to run the assemble command on the command line, and only then do I see listings for RAID devices under /dev. What am I missing?
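With imsm metadata the volume lives inside a container, and mdadm.conf needs an ARRAY line for the container as well as one for the member volume before the initramfs can auto-assemble it. One hedged way to capture both is to let mdadm write them while the array is up:
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # emits the container line and the volume line
sudo update-initramfs -k all -u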
View 9 Replies
Aug 4, 2010
I want to build a 6xSATA RAID 5 system with one of the disks as a spare disk. I think this gives me a chance of 2 of the 6 disks failing without losing data (strictly speaking, only if the second failure waits until the spare has finished rebuilding). Am I right?
Hardware: Intel ICH10R
First I will create a 3xSATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do.
Step 1:
Create RAID Device
Code:
mdadm --create --verbose /dev/md0 --metadata 1.2 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
I read that "--metadata 1.2" is the best option. Is that true?
Create filesystem on the RAID device
Using this method of calculation:
* chunk size = 128kB (for RAID 5)
* block size = 4kB (recommended for large files, and most of the time)
* stride = chunk / block = 128kB / 4kB = 32 (a count of filesystem blocks, not kB)
* stripe-width = stride * ( (n disks in raid5) - 1 ) = 32 * ( 5 - 1 ) = 32 * 4 = 128 (blocks)
Then:
Code:
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=128 /dev/md0
Step 2:
Add spare-disk
Code:
mdadm --add /dev/md0 /dev/sdd1
Is this enough?
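It should be, provided the spare's partition is at least as large as the active members. A quick hedged check that mdadm really registered it as a spare:
Code:
mdadm --detail /dev/md0   # the device table should now list sdd1 with the role 'spare'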
Step 3:
Adding disks:
Code:
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
fsck.ext3 -f /dev/md0
resize2fs /dev/md0
View 1 Replies
May 3, 2011
I have a Dell with Win7 installed on two disks as RAID0. I added a 3rd HD and installed 11.04 onto it. This boots OK, but only Linux shows in the GRUB menu. Win7 is not there, and I have no way to boot it, as the installer put GRUB on disk0 of the RAID, overwriting the Windows boot code.
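A hedged recovery sketch: from Ubuntu, activate the fakeraid set so os-prober has a chance of seeing the Windows volume, then regenerate the GRUB menu; if the boot code on the array itself was damaged, a Win7 install disc's repair console (bootrec /fixmbr) may also be needed:
Code:
sudo apt-get install dmraid
sudo dmraid -ay     # bring up the RAID0 set under /dev/mapper
sudo update-grub    # os-prober should now be able to add a Windows 7 entry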
View 9 Replies
Apr 9, 2011
I have a 3ware controller that has a RAID 1 of two SATA disks. After an outage, the Linux box (which is running Ubuntu) restarted, and the partition is now mounted read-only. I only have the "/" mount point (this is a test server). Now, if I go to the 3ware controller by pressing ALT-3 while booting, I don't see any indication that there is something wrong with the disks. If I let the computer boot, I'm asked by fsck if I want to fix/ignore/etc. the inconsistencies found.
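A read-only root after an outage, with the controller reporting the array healthy, usually means the damage is at the filesystem layer rather than the RAID layer, and the standard hedged fix is a full fsck from an environment where the filesystem isn't mounted:
Code:
# from a live/rescue CD, with the array NOT mounted (device name assumed;
# a 3ware unit typically appears as a single disk, e.g. /dev/sda):
sudo e2fsck -f -y /dev/sda1
sudo reboot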
View 1 Replies
Aug 31, 2010
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a RAID5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:
[Code]....
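Whatever the boot-time assembly problem turns out to be, the hours-long resync after each re-add can be avoided with a write-intent bitmap, which lets mdadm resync only the blocks that changed while the disk was absent. A hedged one-liner:
Code:
sudo mdadm --grow /dev/md0 --bitmap=internal   # makes future re-adds near-instant instead of a full sync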
View 11 Replies
Jun 24, 2010
So, I have a RAID5 array with one spare disk. Is it possible to spin down/shut down the spare disk until a drive fails and the spare is needed?
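mdadm doesn't touch a spare until it is needed, so the drive sits idle and an ordinary standby timeout will keep it spun down. A hedged sketch with hdparm (device name assumed; timeout values 241-251 are in 30-minute units, so 241 = 30 minutes):
Code:
sudo hdparm -S 241 /dev/sde   # spin down after 30 idle minutes
sudo hdparm -y /dev/sde       # or drop it into standby right away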
View 1 Replies
Mar 25, 2011
I have been running a server with an increasingly large md array and have always been plagued with intermittent disk faults. For a long time I attributed those to either temperature or power glitches. I had just embarked on a quest to lower case and drive temperatures - they were running between 43 and 47C, sometimes peaking at 52C - so I've added more case fan power and made sure the drive cage is in the airflow (it has its own fan, too). Also, I've upgraded my power supply and made very sure that all the connectors are good. The array currently is a RAID6 with 5 Seagate 1.5TB drives.
When everything seemed to be working fine, I looked at my SMART logs and found that two of my drives (both well over 14000 operating hours) were showing uncorrectable bad blocks. Since it's RAID6, I figured I couldn't do much harm, so I ran a badblocks test, zeroed the blocks that were reported bad (figuring the drive defect management would remap them to a good part of the disk) and zeroed the superblock. I then added the drive back to the pack and the resync started. At around 50%, a second drive decided to go, and shortly thereafter a third. With only two of five drives left active, RAID6 will fail. Fine. At least no data will be written to it anymore; however, now I cannot reassemble the array anymore.
Whenever I try I get this:
Code:
mdadm --assemble --scan
mdadm: /dev/md1 assembled from 2 drives and 2 spares - not enough to start the array
Code:
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [linear]
md1 : inactive sdf1[4](S) sde1[6](S) sdg1[1](S) sdh1[5](S) sdd1[2](S)
7325679320 blocks super 1.0
md0 : active raid1 sdb2[0] sdc2[1]
312464128 blocks [2/2] [UU]
bitmap: 3/149 pages [12KB], 1024KB chunk
Which is not fine. I'm sure that three of the devices are fine (normally, a failed device would just rejoin the array, skipping most of the resync by way of the bitmap), so I should be able to reassemble the array with the two good ones and the one that failed last, then add the one that failed during the resync, and finally re-add the original offender. However, I have no idea how to get them out of the "(S)" state.
Code:
mdadm --examine /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d79d81cc:fff69625:5fb4ab4c:46d45217 .....
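For the record, the "(S)" markers just mean mdadm gathered the members but refused to start the array; a forced assemble tells it to take the devices with the freshest event counters and run anyway. A hedged sketch - compare the Events line of each member first and leave out the drive that failed first (sde1 is assumed to be that one here; adjust to your output):
Code:
mdadm --stop /dev/md1
mdadm --examine /dev/sd[defgh]1 | grep -E '/dev|Events'   # pick the members with matching, highest counts
mdadm --assemble --force /dev/md1 /dev/sdd1 /dev/sdf1 /dev/sdg1 /dev/sdh1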
View 2 Replies
Apr 18, 2011
When I set up my current workstation (now at CentOS 5.6) I connected the two SATA drives in a RAID 1 configuration. Later, I realized I had a spare EIDE drive, so I installed it, partitioned it, and added it as a hot spare using "mdadm -a". So, now I have three disk drives doing the work of one. I updated /etc/mdadm.conf using the output from mdadm --examine --scan, and the resulting lines are:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1f00f847:088f27d2:dbdfa7d3:1daa2f4e spares=1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=f500b9e9:890132f0:52434d8d:57e9fbfc spares=1
I also updated the /boot/grub/device.map file to read:
(hd0) /dev/sda
(hd1) /dev/sdb
(hd2) /dev/hda
[code]....
View 4 Replies
Jan 11, 2011
I'm using an Intel Mac Pro running Ubuntu Server 10.04 64-bit, and I have it working. Currently I'm just booting Ubuntu via Boot Camp, not EFI. The OS drive is 250GB, which I know is more than enough, but it was what I had free at the time. I added 3 1TB drives to the computer, but I'm not sure how to create a RAID with them. I've done some searching but still haven't been able to get it done successfully.
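One hedged sketch, taking software RAID5 across the three new drives as one possible layout (device names assumed - check fdisk -l for the real ones, and note this wipes those disks):
Code:
sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # so it assembles at boot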
View 3 Replies
Jul 18, 2011
I have a RAID5 of 10 disks, 750GB each, and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle, and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, grub-common, removed /boot/grub and installed GRUB again. Same problem.
I have tried to erase the MBR boot code (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub is still generating the error about RAID version 0.91, and normal boot is back to a blinking cursor. When you're resizing an array, mdadm changes the metadata version from 0.90 to 0.91 precisely to prevent what happened here from happening. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile - various errors about dpkg. So my problem is: I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
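Since update-grub still complains about 0.91, it's worth confirming from the rescue environment that every member really carries 0.90 metadata again - a single stale superblock (for instance on the freshly added disk) is enough to trip GRUB's scan. A hedged check:
Code:
sudo mdadm --examine /dev/sd[b-k]1 | grep -E '/dev|Version'   # every member should report 0.90
sudo grub-install /dev/sda   # reinstall (from the installed system or a chroot) once all members agree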
View 2 Replies
Jun 29, 2011
Is it possible to migrate an installed Ubuntu system from a software RAID to a hardware RAID on the same machine? If so, how would you go about doing it?
View 1 Replies
Aug 4, 2010
I want to build a 6xSATA RAID 5 system with one of the disks as a spare disk. I think this gives me a chance of 2 of the 6 disks failing without losing data. Am I right? Hardware: Intel ICH10R. First I will create a 3xSATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do.
[code].....
EDIT: I'm thinking of putting LVM on top of the RAID 5. What are the advantages and disadvantages of it (LVM over RAID 5)? Is it safe? My MB (Asus P5Q) has the Intel P45 chipset with ICH10R. What kind of RAID does it have - hardware RAID, fake RAID, BIOS RAID, software RAID? These are the specs of the storage:
[code].....
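On the LVM question: layering LVM on top of the md array costs little and buys flexible resizing and snapshots, while the md layer still provides all the redundancy, so the combination is common and safe. A hedged sketch of the stacking (volume names invented for illustration):
Code:
sudo pvcreate /dev/md0
sudo vgcreate vg_raid /dev/md0
sudo lvcreate -n lv_data -l 100%FREE vg_raid
sudo mkfs.ext3 /dev/vg_raid/lv_data
As for the chipset: the ICH10R is fake RAID - the BIOS only stores the configuration and the work is done in software anyway - so plain mdadm is generally the better-supported choice on Linux.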
View 1 Replies
Nov 26, 2010
I have installed Ubuntu on my m1530 since 8.04 and currently dual-boot Win7 and 10.10. I would like to dual-boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO MB with two Seagate drives in RAID 0. (The RAID controller is an SB850 on that MB.) I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx, and received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps someone to tell me that the raid controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but fail to get past detecting my CD-ROM drive, since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a RAID controller from a MB.
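For what it's worth, on 10.10 it is usually dmraid rather than kpartx that exposes a motherboard fakeraid set to GParted and the installer. A hedged sequence to try from the live session:
Code:
sudo apt-get install dmraid
sudo dmraid -ay    # activate the SB850 RAID0 set
ls /dev/mapper/    # the array should appear here as a single device to point GParted/the installer at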
View 6 Replies
Mar 22, 2011
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
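A rough, hedged yardstick: hardware controllers rebuild block by block, so the used space doesn't matter, and the time is roughly capacity divided by sustained write speed - 1 TB at ~100 MB/s is about 1,000,000 MB / 100 MB/s = 10,000 s, or just under 3 hours on an idle array. Budget several times that if the array is under load or the card throttles rebuild priority.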
View 1 Replies
Nov 17, 2010
I have 10.04.1 on my server with a 250GB SATA drive. I have all my files on this hard drive. I'm running out of space, so I have another 250GB SATA drive I need installed. I want to create RAID 0 so I can expand my server's hard drive space. I don't want to lose my data on the original hard drive or reinstall to create the RAID. Is there a way to achieve this with mdadm without altering the first hard drive's data?
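A short, hedged answer: no - mdadm's RAID 0 stripes from block zero, so a --create across both drives destroys the existing filesystem, and there is no in-place adoption of a disk that already carries data. The realistic options are back up, create the array, restore; or skip RAID 0 and simply mount the second drive on a subdirectory (or move to LVM, which can span disks, at the cost of a one-time migration).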
View 2 Replies
May 10, 2011
I have a server that has one drive with Ubuntu already loaded on it. I would like to add another drive and then create a mirrored RAID between the two.
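A drive can't be mirrored in place, so the usual hedged recipe is: clone the partition table onto the new disk, build a one-legged mirror there, copy the system across, and only then absorb the original drive. A sketch (sda = existing disk, sdb = new disk - assumptions):
Code:
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb   # clone the partition layout
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# mkfs, copy / onto md0, fix fstab and grub, boot from md0, then:
sudo mdadm --add /dev/md0 /dev/sda1   # the old drive syncs from the array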
View 4 Replies
Aug 3, 2010
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu ETERNUS DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is being recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success and the new logical drive could have been used, but when restarting the server, the controller's BIOS fails with an error (not surprising - one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
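Since MegaCLI can talk to the enclosure but no /dev node appears, one hedged thing to try is forcing a SCSI rescan after the driver loads, then checking whether the DX60's logical drive is actually exported as a LUN the kernel can see:
Code:
# run as root:
for h in /sys/class/scsi_host/host*; do echo '- - -' > $h/scan; done   # rescan every HBA
cat /proc/scsi/scsi   # the DX60 LUN should be listed here if the kernel can see it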
View 3 Replies
Feb 5, 2011
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do it never recovers properly and tells me that I have a faulty spare in my array. More-specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sorta-thing. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID5 mdadm device (which gives me a bit less than 4TB.)
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and re-booted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:
Code:
user@guybrush:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc5[3] sdb5[1] sda5[0]
3779096448 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[Code]...
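A "faulty spare" immediately after a rebuild attempt very often means the kernel hit a read error on one of the two surviving disks mid-recovery, so the place to look is the kernel log and the drives' SMART counters rather than mdadm itself. A hedged sketch:
Code:
dmesg | grep -iE 'ata|sd[abc]'   # look for medium/read errors logged during the rebuild
sudo smartctl -a /dev/sda | grep -iE 'pending|reallocated|uncorrect'   # repeat for each member disk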
View 9 Replies
Jun 12, 2010
I have two 1TB hard drives in a RAID 1 (mirroring) array. I would like to add a third 1TB drive and create a RAID 5 with the 3 drives for a 2TB system. I have ubuntu installed on a separate drive. Is it possible to convert my RAID 1 system to a RAID 5 without losing the data? Is there a better solution?
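For what it's worth, mdadm (roughly version 3.1 onwards) can do this reshape in place: a two-disk RAID 1 converts instantly to a two-disk RAID 5 (the layouts are compatible), after which the third disk is added and the array grown. A hedged sketch - keep backups regardless, since reshapes run for hours (device names assumed):
Code:
sudo mdadm --grow /dev/md0 --level=5          # RAID1 -> 2-disk RAID5, effectively instant
sudo mdadm --add /dev/md0 /dev/sdd1           # the new third drive
sudo mdadm --grow /dev/md0 --raid-devices=3   # reshape to ~2TB usable
sudo resize2fs /dev/md0                       # then grow the filesystem to match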
View 1 Replies
Feb 28, 2011
If I have Windows installed in RAID-0, then install VirtualBox and install all my Linux OSes into VirtualBox, will they be a RAID-0 install without needing to install RAID drivers?
View 1 Replies
Dec 10, 2009
I am going to be using CentOS 5.4 for a home storage server. It will be RAID6 on 6 x 1TB drives. I plan on using an external enclosure which is connected via two SFF-8088 cables (4 drives apiece). I am trying to find a non-RAID HBA which would support this external enclosure and allow the use of standard Linux software RAID.
If this is not an option, I'd consider using a hardware-based RAID card, but they are very expensive. The Adaptec 5085 is one option but is almost $800. If that is what I need for this thing to be solid then that is fine, I will spend the money, but I am thinking that software RAID may be the way to go.
View 3 Replies
Sep 15, 2010
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1-terabyte WD HDDs, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I had set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, GRUB2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the RAID array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after GRUB2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do, but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
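For reference, the local-top approach mentioned above boils down to a tiny script that runs dmraid before the root filesystem is mounted; a hedged sketch of the common pattern (not necessarily the exact script from the blog cited):
Code:
#!/bin/sh
# save as /etc/initramfs-tools/scripts/local-top/dmraid, then:
#   sudo chmod +x /etc/initramfs-tools/scripts/local-top/dmraid && sudo update-initramfs -u
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay   # activate fakeraid sets before root is mounted
exit 0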
View 4 Replies
Feb 22, 2010
This is my first post on LinuxQuestions.org, so I hope I am posting this in the right forum; English is not my mother language, so I hope I have made myself understandable. I have this small PowerEdge 350 server with two 80GB IDE hard drives running Red Hat Fedora 5, with software RAID 1. Some months ago hda started to have unrecoverable I/O read errors for some blocks. I left it that way for some time; then hdb also started having the same kind of issues. I got new hard drives and replaced hdb first, but the problem is mdadm hasn't been able to recover the RAID. I have three Linux RAID partitions making up the RAID 1 between hda and hdb:
/dev/md0
/dev/md1
/dev/md2
[code]....
So how can I force the RAID to be rebuilt while ignoring the hda errors? Is there a way to recover this?
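When the only remaining source disk has read errors, a plain mdadm rebuild tends to abort at each bad block; the usual hedged escape is to clone hda onto a fresh disk with GNU ddrescue, which skips and logs unreadable sectors instead of dying, and then rebuild the mirror from the clone:
Code:
ddrescue -f /dev/hda /dev/hdc /root/hda-rescue.log   # hdc = new disk (name assumed); keep the log to retry bad areas later
# then assemble the md devices from the clone instead of the failing hda and re-add the other new disk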
View 7 Replies