General :: mdadm: Removed Two Drives (Still Valid) From RAID 5 And Need To Add Them Back In?
Mar 11, 2011
I have a 4-disk RAID 5 array on my Ubuntu 10.10 box; the members are /dev/sd[c,d,e,f]. smartctl started notifying me that /dev/sde had some bad sectors and that the number of errors was increasing each day. To mitigate this I decided to buy a new drive and replace it. I have an external 4-bay disk enclosure. I failed /dev/sde via mdadm:
Code:
mdadm --manage /dev/md0 --fail /dev/sde
mdadm --manage /dev/md0 --remove /dev/sde
[code]...
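If the superblocks on the removed drive are intact, mdadm can often take it back without treating it as a brand-new member; a minimal sketch, assuming the array is otherwise running:
Code:
mdadm --manage /dev/md0 --re-add /dev/sde
# if --re-add is refused, a plain --add works but triggers a full resync:
mdadm --manage /dev/md0 --add /dev/sde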
View 5 Replies
Jan 16, 2010
I am trying to set up an mdadm RAID in a new PC that I am building for home theatre. The machine boots just fine from /dev/sdc running Ubuntu 9.10. However, in GParted, /dev/sda and /dev/sdb show as part of /dev/mapper/sil_ajbicfacbaej. Both /dev/sda and /dev/sdb used to be part of a SIL hardware RAID on a previous machine. I would like to use them in a new mdadm RAID on this new machine; the old hardware card was really quite slow, and the drives are now plugged into the motherboard, so they should be much faster there.
fdisk -l shows this:
~$ sudo fdisk -l
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
[code]....
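The /dev/mapper/sil_* device means dmraid is still claiming the disks through the leftover SIL metadata; one possible approach is to erase that metadata before building the mdadm array (destructive to the old fakeraid set, so only once nothing on it is needed):
Code:
sudo dmraid -r            # list the fakeraid sets dmraid has found
sudo dmraid -an           # deactivate the mapped device
sudo dmraid -rE /dev/sda  # erase the old SIL metadata from each former member
sudo dmraid -rE /dev/sdb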
View 6 Replies
View Related
Apr 24, 2011
This is from a Synology box with 3 disks, one of which is damaged. But this disk wasn't in use. (Take a look at the RAID size of 493 GB, versus the two remaining disks of 250 GB each.)
The other disks held a linear RAID. When this disk failed, the Synology device reported that the volume had crashed, but it looks like the disk was never part of that volume. Quote:
DiskStation> mdadm --detail /dev/md2
/dev/md2:
Version : 00.90
[code]....
View 3 Replies
View Related
Nov 30, 2010
I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check that there were no problems, and it booted. I have installed the old drive back in the system as hdc and need to resync the drives (the old drive's partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array, hda primary and hdc secondary, using mdadm?
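raidhotadd came from the old raidtools package; the mdadm equivalent is roughly the following, assuming hdc is partitioned to match hda:
Code:
mdadm --manage /dev/md0 --add /dev/hdc1   # partition and md device names assumed
cat /proc/mdstat                          # watch the resync progress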
View 1 Replies
View Related
Feb 20, 2011
This is the message I get when I try to start it: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off of it would be awesome.
sudo mdadm --examine --scan -v
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=91c36708:a7cbb532:5b51dc92:ba008491
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1,/dev/sda1
[code]...
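A useful first diagnostic, before anything that writes, is to compare the event counters in each member's superblock; the members whose counts lag behind are the ones mdadm is rejecting (read-only):
Code:
sudo mdadm --examine /dev/sd[abcd]1 | grep -E '/dev/sd|Events'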
View 1 Replies
View Related
Jul 11, 2010
I'm writing a monitoring plugin for a home server RAID, mdadm on Ubuntu 10.04. code...
I'm looking for the possible values of "state" but can't seem to find them anywhere; neither the man page nor the online documentation I have found seems to have a list.
Does anyone know where to find a list of possible states?
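One place the kernel exposes the state directly is sysfs; the kernel's md documentation lists array_state values such as clear, inactive, readonly, read-auto, clean, active, write-pending, and active-idle. A sketch, md0 assumed:
Code:
cat /sys/block/md0/md/array_state
mdadm --detail /dev/md0 | grep 'State :'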
View 1 Replies
View Related
Jul 18, 2011
I have a RAID 5 across 10 disks of 750 GB, and it has worked fine with GRUB for a long time on Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it, and then resized it. BUT I had started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle, and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up and finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, and grub-common, removed /boot/grub, and installed GRUB again. Same problem.
I have tried to erase the MBR (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub is still generating the error about RAID version 0.91, and a normal boot is back to a blinking cursor. When you're reshaping an array, mdadm changes the metadata version from 0.90 to 0.91 to flag the reshape in progress and prevent exactly what happened here; but since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch at [URL], but I can't compile it; various errors about dpkg. So my problem is, I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
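One possible check is to confirm that no member still carries the transitional superblock, and then reinstall GRUB once everything reads 0.90 (device names assumed):
Code:
sudo mdadm --examine /dev/sd[b-k]1 | grep -i version
sudo grub-install --recheck /dev/sda
sudo update-grub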
View 2 Replies
View Related
Aug 14, 2010
I'm running a Debian home server with a 3-disk (1 GB each) RAID 5 array using mdadm (the OS is on a separate disk). Now smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (except for a backup of valuable data). I found some articles on how to fix these sectors, but I'm unaware of what the effect on the whole array will be.
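One thing md can do on its own: a scrub pass reads every stripe and rewrites unreadable sectors from parity, which often clears pending bad sectors. A sketch, assuming the array is /dev/md0:
Code:
echo check > /sys/block/md0/md/sync_action   # read-verify the whole array
cat /proc/mdstat                             # shows the check progress
# writing 'repair' instead of 'check' also rewrites mismatched parity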
View 4 Replies
View Related
Apr 8, 2010
I have created a RAID on one of my servers. The RAID health is OK, but it shows a warning, so what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
View 1 Replies
View Related
Apr 7, 2011
I am trying to create a RAID 1 ramdisk. Below are the commands I used:
[root@abidbodal dev]# mke2fs -m 0 /dev/ram8
[root@abidbodal dev]# mount /dev/ram8 /mnt/rd8
[root@abidbodal dev]# mke2fs -m 0 /dev/ram9
[code]....
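For a mirror, the usual order is the reverse of the above: build the md device from the raw ram disks first, then make the filesystem on /dev/md0 rather than on the individual members. A minimal sketch:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/ram8 /dev/ram9
mke2fs -m 0 /dev/md0
mount /dev/md0 /mnt/rd8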
View 3 Replies
View Related
Sep 28, 2009
The upgrade I did removed the mdadm package, and I have a RAID array. Did the mdadm package get replaced by something else? I'm fearful that if I need to reboot or the system loses power, I won't be able to access the array. When I try to (re)install the package, it wants to remove many packages that I also need.
[Code]...
0 upgraded, 1 newly installed, 56 to remove and 0 not upgraded. Need to get 433kB of archives. After this operation, 250MB disk space will be freed. Do you want to continue [Y/n]? n
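Before committing to anything, apt can show exactly what it would do without doing it; a first step worth trying:
Code:
sudo apt-get -s install mdadm   # -s simulates; nothing is installed or removed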
View 1 Replies
View Related
Aug 15, 2010
I'm building a new desktop computer, on which I plan to install Debian Squeeze. I'll have a 1 TB SATA hard drive in the system. I'm also considering using two 500 GB external USB drives, but I'm debating how I want to use them. Running them all separately for 2 TB of space could be a nightmare, with three potential points of failure, so I was thinking of using the two external drives as a backup system instead.
I'm considering linking the two external drives in a RAID 0 array, then linking that array and the internal drive in a RAID 1 array. I would use mdadm software RAID for all of this so I could use individual partitions in the arrays, avoid hardware dependency, and have greater software control. So, is this feasible (a partial RAID 0+1 setup)? Moreover, what kind of performance could I expect from using potentially slow external drives (one of which I know has a very long spin-up time after idle periods) in a mirroring setup with the internal drive? Would I be far better off using a filesystem backup daemon instead?
EDIT: After some more research and brainstorming, I've decided I might just end up using rsync+cron, lsyncd, or DRBD (assuming it can easily make backups locally). I'd probably have to link up the external drives in RAID 0 (or use some filesystem link trickery). But I suppose such a setup would offer greater control, flexibility in disk capacities (the full system isn't so strictly limited to the capacity of the smallest member of the array), and granularity than RAID 0+1 would. I'm still open to thoughts on the mdadm RAID 0+1 solution (sketched below), but does anyone have any advice on choosing backup software? For some background on my needs, I'll be using this computer as both an everyday desktop and a personal LAMP server (MySQL database files would be included in the backups).
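For reference, the nested mdadm layout described above might look roughly like this (device names and partitions are assumptions, and this is untested with USB members):
Code:
# stripe the two external 500 GB drives together:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# then mirror the internal 1 TB drive against that stripe:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/md0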
View 6 Replies
View Related
Jan 15, 2010
CentOS 4.4: 3 of 4 hard drives were removed, and now the machine won't boot. It can't find the LV, so the kernel panics.
View 2 Replies
View Related
Jan 18, 2010
I'm breaking into the OS-drive side with RAID 1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID 1), and have been testing the fail-over and rebuild process. Physically failing out either drive works great. My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
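Spelled out, the proposed cycle might look like the following (device and file names are assumptions; note that dd images every sector of the disk, used or not):
Code:
mdadm --manage /dev/md0 --fail /dev/sda1 --remove /dev/sda1
dd if=/dev/sda of=/mnt/external/os-disk.img bs=1M
mdadm --manage /dev/md0 --add /dev/sda1   # triggers a rebuild onto sda1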
View 4 Replies
View Related
Jun 9, 2011
So I set up a RAID 10 system, and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
root@wolfden:~# cat /proc/mdstat
Personalities : [raid0] [raid10]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
[code]....
View 2 Replies
View Related
Aug 6, 2010
I'm trying to get mdadm to assemble all my drives with the help of UUIDs.
When I use
$ sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd
mdadm: /dev/md0 has been started with 3 drives.
So the setup works. But I want to do it the proper way with UUID.
$ sudo blkid
[Code]...
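Assembly by UUID uses the array UUID from the md superblock, in the colon-separated form printed by mdadm --detail or --examine (blkid shows the same value with dashes). A sketch with a placeholder UUID:
Code:
sudo mdadm --assemble /dev/md0 --uuid=<array-uuid>
# or persist it in /etc/mdadm/mdadm.conf so --assemble --scan finds it:
ARRAY /dev/md0 UUID=<array-uuid>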
View 1 Replies
View Related
Jun 5, 2011
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04. I then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error. mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that all 4 drives are present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
[code]....
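When the members disagree only in their event counts, which is common after an unclean shutdown, a forced assembly naming every member explicitly is the usual next step; partition names are assumed here:
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sd[bcde]1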
View 2 Replies
View Related
Mar 10, 2010
I have done lots of searching and I haven't been able to find anyone else with the same problem. Whenever I create a RAID with mdadm, regardless of level (I've done linear, 0, and 5), the command I use is:
Code:
mdadm --create --run --verbose /dev/md0 --raid-devices=11 --spare-devices=1 --chunk=256 --level=5 /dev/sd[abcdefghijkl]1
The RAID is built as RAID 5, as it should be. However, when I check /proc/partitions it shows up as "md3".
[Code]...
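With 0.90 superblocks, the preferred-minor number stored in the superblock, plus whatever the initramfs auto-assembles at boot, can override the name given at --create. One common fix is to pin the array in the config file and regenerate the initramfs (Debian/Ubuntu paths assumed):
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # records an ARRAY line with the UUID
update-initramfs -u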
View 1 Replies
View Related
Oct 21, 2010
I have a previously defined RAID 5 (4 disks). This worked in Ubuntu 8.04. I recently moved to CentOS 5, and I cannot seem to get the array back online. cat /proc/mdstat shows no RAID levels (personalities) and no drives listed. mdadm --detail --scan returns nothing. mdadm -QE returned a UUID string and the ARRAY output. I can mdadm --examine all the members of the original array. I am not versed in mdadm enough to really understand what I can safely run and what would erase the data on the drives. Please assist; I will try to post the exact output of commands, but the system is kind of unreachable and being rebuilt. I just want to ensure my data on the array is not lost.
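None of the following will erase data on the members; they are the standard way to investigate before doing anything riskier (device names assumed):
Code:
cat /proc/mdstat                  # which personalities and arrays the kernel knows about
modprobe raid456                  # CentOS may simply not have loaded the RAID 5 personality
mdadm --examine /dev/sd[bcde]1    # prints each member's superblock
mdadm --assemble --scan --verbose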
View 7 Replies
View Related
Feb 20, 2011
My only goal is to have a RAID 5 that auto-assembles and auto-mounts. Hardware: 4 x 2TB SATA (RAID disks), 1 x 500GB IDE (OS disk), and 1 IDE DVD drive, all plugged directly into the motherboard (nForce 750i SLI).
The RAID disks start with GPT partitions and ext4. The problem occurs when I restart my computer after building the array for the first time. I was able to watch it assemble, I was able to partition it, and I even mounted it once. This is the second time I've built it, so I have watched everything that happened. I don't know if this has anything to do with my problem, but when I created the RAID my drive designations were: sda - 500GB (OS), sd[bcde] - 2TB (RAID). When I restarted: sd[abcd] - 2TB (RAID), sde - 500GB (OS).
[Code]...
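The drive-letter shuffle is exactly why auto-assembly should be pinned to the array's UUID rather than device names. A sketch of the two config entries involved (UUID and mount point are placeholders):
Code:
# /etc/mdadm/mdadm.conf
ARRAY /dev/md0 UUID=<uuid-from-mdadm-detail>
# /etc/fstab
/dev/md0  /mnt/raid  ext4  defaults  0  2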
View 3 Replies
View Related
Aug 21, 2010
I HAD a fedora 11 server with md RAID 1 across two 1TB SATA drives. The md0 space was set up to be an LVM PV and the single LVM VG was carved up into 5 or 6 LVs. The MB on this system died and I wound up buying a new one.
Now I want to recover the data from the RAID1 setup on the new server. However, when I attach the two 1TB drives to a new fedora 13 setup, mdadm is only able to find one of the two drives. The partition on the second drive shows "busy" during an mdadm -A -s -v to scan for md volumes.
Well, one drive should be enough since this is RAID1, right? Well, when I do a pvscan -v, the other drive shows up as a "NEW" pv not allocated to a VG. In addition, vgscan does print "Invalid metadata header checksum" when it runs but it doesn't point at any particular PV. I'm afraid to go any further with LVM since I can't afford to lose the data on this system. It is backed up offsite, but the restore will take several days and I can't afford to be down that long.
Are there any tools or techniques where I can dig deeper into what each drive, in the RAID1 pair, has right and wrong with it and pick one that I can force into a usable VG so that I can recover the data?
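A "busy" member often means something else (dmraid, LVM, or a half-assembled md array) has already claimed the partition; these checks are non-destructive (device name assumed):
Code:
cat /proc/mdstat              # is the partition already inside a partially assembled md?
dmsetup ls                    # has device-mapper grabbed it?
mdadm --examine /dev/sdb1     # compare both drives' superblocks before forcing anything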
View 2 Replies
View Related
Mar 2, 2011
I have a server that was running a hardware ISW RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive, and I was able to set it to rebuild the RAID. Continuing the reboot, everything came up just fine except the RAID 1 on the system disk. I have tried many times to get the system to rebuild the RAID using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed.
At present the system does not show up with a RAID setup on the system disk (this comprises the entire 1TB disk, with two partitions: sda1 as / and sda2 as swap).
Problem: I have decided to forgo the Intel RAID and just use mdadm. I have a test system set up to duplicate the server setup (not the software, but the disk partitions).
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
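The usual migration pattern here is to build a degraded RAID 1 on the spare disk, move the system onto it, and then absorb the original disk; a rough sketch with device names assumed and the boot-loader steps omitted:
Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb                 # clone the partition layout
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# copy the root filesystem onto /dev/md0, point grub and fstab at it, reboot, then:
mdadm --manage /dev/md0 --add /dev/sda1              # sync the original disk in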
View 12 Replies
View Related
Sep 10, 2010
I have a 7-drive RAID array on my computer. Recently my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". These both return errors and will not complete on the new RAID card.
Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.
[code]....
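Since the drives have been reshuffled, it may help to identify every member by its superblock and then name all seven explicitly; mdadm reads the positions from the superblocks, so the order on the command line doesn't matter (partition names assumed):
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --examine /dev/sd?1 | grep -E '/dev/|Events'
sudo mdadm --assemble --force /dev/md0 /dev/sd[bcdefgh]1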
View 4 Replies
View Related
Mar 30, 2011
I know I can simply create a degraded raid array and copy the data to the other drive like this: mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
But I want the specific disk to keep the raw ext3 filesystem so I can still use it from FreeBSD. When using the command above, the disk becomes a RAID member and I can't mount /dev/sdb1 anymore. A little background info: the drives in question are used as backup drives for a couple of Linux and FreeBSD servers. I am using the ext3 filesystem to make sure I can quickly recover the data, since both FreeBSD and Linux can read it without problems. Does someone have a different solution for this (2 drives in RAID 1 that are readable by FreeBSD and writeable by Linux)?
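One possibility, offered as a sketch: with 0.90 or 1.0 metadata the md superblock sits at the end of the device, so the ext3 filesystem still starts at sector 0 and a raw member stays mountable (read-only is safest) from an OS without md support:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb1
# FreeBSD side (device name assumed): mount -t ext2fs -o ro /dev/ada1s1 /mnt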
View 1 Replies
View Related
Feb 23, 2010
I'm renting a dedicated server from a company that claims the server has 2 hard drives in a software RAID 1 array, but I need to make sure that the server really has the 2 HDDs, and to check the size of the 2nd drive. How do I do that? The system is CentOS 5.3.
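Two read-only commands are enough to verify the claim from inside the server:
Code:
cat /proc/mdstat   # lists the md arrays and the member disks backing them
sudo fdisk -l      # lists every physical disk along with its size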
View 1 Replies
View Related
Dec 16, 2009
I've got a little problem after installing Fedora 12. At first there was no problem opening the RAID device; after I tried to automount it with crypttab and fstab, I'm no longer able to open it.
Here are some outputs code...
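For comparison, a crypttab/fstab pairing for an encrypted md device usually takes roughly this shape (the names and filesystem here are placeholders, not your exact config):
Code:
# /etc/crypttab  (name  source-device  keyfile  options)
md0crypt  /dev/md0  none  luks
# /etc/fstab
/dev/mapper/md0crypt  /mnt/data  ext4  defaults  0  2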
View 2 Replies
View Related
Feb 2, 2010
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and on with the drive there, just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to trick the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup, and for how to remove the bad drive from the array and replace it with a fresh one without data loss.
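Since the error complains about /dev/sdb rather than a partition, one thing worth checking is whether the superblocks actually live on the partitions instead of the whole disks; a variation to try:
Code:
sudo mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1   # do the partitions hold superblocks?
sudo mdadm --assemble /dev/md0 /dev/sd[bcd]1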
View 3 Replies
View Related
Jun 7, 2010
I just had a whole 2TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lost one of my RAID 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is that the OS picks up the disks, but it doesn't see the partitions.
I ran smartctl on all the drives in question and they're all in good working order.
Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any possible errors that might be present?
Here is what the "good" drive looks like when i use sfdisk:
Quote:
sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 121600 121601- 976760001 83 Linux
/dev/sda2 0 - 0 0 0 Empty
[Code]....
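If only the partition tables were damaged and the drives really are identical, one recovery path is to copy the intact drive's table across; this is destructive if the drives differ, so it's offered only as a sketch (device names assumed):
Code:
sudo sfdisk -d /dev/sda > sda-table.backup       # keep a copy of the good table first
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb   # clone the layout to a busted drive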
View 2 Replies
View Related
Jun 18, 2010
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 -
the RAID drive for the above three disks. The sda1 disk has failed and the array is running on 2 of 3 disks.
/dev/sdc (OS disk)
/dev/sde (new 2tb disk - unused)
/dev/sdf (new 2tb disk - unused)
My plan was to rebuild the array using the two new disks as RAID 1. Would the best way to do this be to create a new RAID 1 array on /dev/md1, then copy all data over from /dev/md0? Also, this may sound stupid, but since all 3 drives in md0 are identical, I'm not sure which physical disk is bad. I tried disconnecting each disk one by one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure how to remove it properly.
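The create-then-copy plan is sound; roughly, with the two new disks partitioned first (device names taken from the list above, ext3 assumed):
Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mke2fs -j /dev/md1                                # -j makes it ext3
mount /dev/md1 /mnt/new && rsync -a /mnt/old/ /mnt/new/
mdadm --manage /dev/md0 --remove /dev/sda1        # finish removing the failed member
# the physical disk can then be matched by serial number:
hdparm -I /dev/sda | grep -i serial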
View 3 Replies
View Related
Oct 6, 2010
Can I use UUIDs to set up a RAID with mdadm?
View 3 Replies
View Related