Software :: RAID Created With 'mdadm' On /dev/md0, But Uses /dev/md3?
Mar 10, 2010
I have done lots of searching and I haven't been able to find anyone else with the same problem. Whenever I create a RAID with 'mdadm', regardless of level (I've done linear, 0, and 5), the command I use is:
Code:
mdadm --create --run --verbose /dev/md0 --raid-devices=11 --spare-devices=1 --chunk=256 --level=5 /dev/sd[abcdefghijkl]1
The RAID is built as RAID 5, as it should be. However, when I check /proc/partitions it shows up as "md3".
[Code]...
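For what it's worth, this usually happens when the initramfs or an old config assembles the array under a different node before the name on the command line is considered. A minimal sketch of pinning the name by UUID (the path /etc/mdadm/mdadm.conf is the Debian/Ubuntu location; adjust for your distro, and the UUID below is a placeholder):
Code:
# capture the array's UUID-based definition
mdadm --detail --scan
# append the resulting ARRAY line, e.g.:
echo 'ARRAY /dev/md0 UUID=<uuid-from-above>' >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so early assembly uses the same name
update-initramfs -u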
View 1 Replies
Jul 18, 2011
I have a raid5 across 10 disks (750GB), and it has worked fine with grub for a long time on Ubuntu 10.04 LTS. A couple of days ago I added a disk to the raid, grew it and then resized it. BUT I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle, and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the raid seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated grub, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc and grub-common, removed /boot/grub and installed grub again. Same problem.
I have tried erasing the MBR (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (ide disk, system) and sdb (sata, new raid disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall grub on both sda and sdb, no luck. update-grub is still generating the error about raid version 0.91, and a normal boot is back to a blinking cursor. When you're resizing a raid, mdadm changes the metadata version from 0.90 to 0.91 to prevent exactly the kind of interruption that happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile it; various errors about dpkg. So my problem is, I can't get grub to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
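In case it helps anyone with the same symptom, a hedged sketch of what I'd check from the rescue CD (device names are examples, and I'm assuming the root filesystem lives on /dev/md0); grub reads the metadata version from the member superblocks, so the first step is confirming they really are all back at 0.90:
Code:
# confirm every member reports metadata 0.90, not 0.91
mdadm --examine /dev/sd[a-k]1 | grep -i version
# chroot into the installed system and reinstall grub
mount /dev/md0 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub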
View 2 Replies
Jul 19, 2011
This is the error message I'm getting when trying to format the mdadm RAID5 created with 4 drives:
Code:
Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/md1, start=0, size=6001196531712, type=
Entering MS-DOS parser (offset=0, size=6001196531712)
MSDOS_MAGIC found
found partition type 0xee => protective MBR for GPT
Exiting MS-DOS parser
[Code]...
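The parser output suggests a stale protective MBR/GPT left over from a previous use of the member disks. A hedged sketch of clearing the old signatures from the md device before partitioning (destructive; only run this on a device whose contents you intend to discard):
Code:
# wipe leftover filesystem/partition-table signatures from the array device
wipefs -a /dev/md1
# write a fresh GPT label and a single partition
parted /dev/md1 mklabel gpt
parted /dev/md1 mkpart primary ext4 0% 100%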
View 3 Replies
Jan 17, 2011
Relatively inexperienced user of Linux/Ubuntu. Not too savvy, I admit, and I like to use the GUI as much as possible. Not a great fan of the Terminal window... A couple of weeks ago I installed Ubuntu 10.10 (Desktop Edition) using the Alternate install disk (don't ask why!) on a 4GB usb stick. Working fine except for one thing with the raid array. I created a raid5 array made of 6 drives using the GUI (Disk Utility). After an expansion of the array (or was it a reinstall of the OS, I can't remember exactly?) the array does not autostart anymore. Of course it does not automount anymore either.
THE WEIRD THING is that I can still start it MANUALLY from the "Disk Utility" GUI after two tries, and it works just fine thereafter!!! The first time I try to start it, it gives an error (something about /dev/md0_127 being not ready or busy). THE SECOND TRY ALWAYS WORKS like a charm; the array starts and I can mount it just fine. Here is a screenshot: I have also noticed that there is no entry in fstab for /dev/md0, although I can manually mount it using the same Disk Utility GUI. That is strange to me. Is it normal? I could easily add it manually, but then Ubuntu won't boot anymore (I tried and failed, hence the reinstall). I tried for two weeks to find a solution browsing different forums, but the problem is beyond my expertise...
BELOW are further details about my configuration: mdadm.conf, fstab, fdisk -l output and other info. I don't want to lose my data, but it would be nice to make this thing work and be able to access my fileserver via VNC instead of having to keep it connected to an LCD monitor as now. This is the blkid output:
[Code]...
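A sketch of what typically fixes the no-autostart symptom on Ubuntu 10.10, assuming the array itself is healthy; the idea is to record the array in mdadm's config and the filesystem in fstab by UUID (mount point and UUID below are placeholders):
Code:
# record the array so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# find the filesystem UUID and mount by it rather than by /dev/md0
blkid /dev/md0
# example fstab line:
# UUID=<fs-uuid> /mnt/raid ext4 defaults,nobootwait 0 2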
View 3 Replies
Jul 11, 2010
I'm writing a monitoring plugin for a home server RAID, mdadm on Ubuntu 10.04. [Code]...
I'm looking for the possible values of "state" but can't seem to find them anywhere; neither man nor the online documentation I have found seems to have a list.
Does anyone know where to find a list of possible states?
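If it helps, the kernel exposes the array state through sysfs, and the documented values there are a fixed set (this is from the kernel's md documentation; mdadm --detail's "State :" line is a looser combination of flags like clean/active plus degraded/recovering/resyncing):
Code:
# the raw array state, one word from a documented set:
# clear, inactive, suspended, readonly, read-auto, clean,
# active, write-pending, active-idle
cat /sys/block/md0/md/array_state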
View 1 Replies
Jun 5, 2011
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04. I then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that there are 4 drives present as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
[code]....
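A hedged sketch of how I'd investigate this (member names assumed to be /dev/sd[abcd]1; adjust to your layout). The usual cause is mismatched event counts after the OS-disk failure, which --force can often override, at the risk of losing the last few writes on the stale members:
Code:
# compare event counts and states across the members
mdadm --examine /dev/sd[abcd]1 | egrep 'Events|State|/dev/'
# stop the half-assembled array, then force assembly
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[abcd]1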
View 2 Replies
Oct 21, 2010
I have a previously defined RAID5 (4 disks). This worked in Ubuntu 8.04. I recently moved to CentOS 5, and I cannot seem to get the drive back online. cat /proc/mdstat shows no raid levels (personalities) and no drives listed. mdadm --detail --scan returns nothing. mdadm -QE returned a UUID string and the ARRAY output. I can mdadm --examine all the members of the original array. I am not versed in mdadm enough to really understand what I can run, and what I should not run because it would erase the data on the drives. Please assist; I will try to post exact output of commands, but the system is kind of unreachable and being rebuilt... I just want to ensure my data on the array is not lost.
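Since --examine works on every member, the superblocks are intact; a cautious sketch (read-only, so nothing on the disks is changed; /dev/sd[bcde]1 is an assumed member list):
Code:
# --examine only reads the superblocks, it never writes
mdadm --examine /dev/sd[bcde]1
# assemble read-only so no resync or writes can start
mdadm --assemble --readonly /dev/md0 /dev/sd[bcde]1
cat /proc/mdstat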
View 7 Replies
Aug 14, 2010
I'm running a Debian homeserver with a 3-disk (1GB each) raid 5 array using mdadm (the OS is on a separate disk). Now smartmontools has noticed some bad sectors on one of the disks, and I'm not sure what to do next (except for a backup of valuable data). I found some articles on how to fix these sectors, but I'm unaware what the result on the whole array will be.
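One relevant detail: on a redundant array, md can repair unreadable sectors itself, because a read error during a scrub gets rewritten from parity, which forces the drive to remap the sector. A minimal sketch of triggering a scrub (md0 assumed):
Code:
# start a full read-check; md rewrites any unreadable blocks from parity
echo check > /sys/block/md0/md/sync_action
# watch progress, then see how many mismatches were found
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt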
View 4 Replies
Feb 20, 2011
My only goal is to have a raid-5 that auto-assembles and auto-mounts. Hardware: 4*2TB sata (raid disks), 1*500GB IDE (OS disk), 1*DVD IDE, all plugged directly into the motherboard (nForce 750i SLI).
Starting partitions on the raid disks: gpt, ext4. The problem occurs when I restart my computer after building the array for the first time. I was able to see it assemble, I was able to partition it, I even mounted it once. This is the second time I've built it, so I have watched everything that happened. I don't know if this has anything to do with my problem, but when I created the raid my drive designations were: sda - 500GB (OS), sd[bcde] - 2TB (raid). When I restarted: sd[abcd] - 2TB (raid), sde - 500GB (OS).
[Code]...
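For the goal stated above, the robust approach is to identify everything by UUID so the sd* shuffling stops mattering; a sketch (mount point and UUID are placeholders):
Code:
# record the array identity, independent of sd* ordering
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# get the filesystem UUID for fstab
blkid /dev/md0
# fstab example: UUID=<fs-uuid> /data ext4 defaults 0 2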
View 3 Replies
Aug 21, 2010
I HAD a Fedora 11 server with md RAID 1 across two 1TB SATA drives. The md0 space was set up as an LVM PV, and the single LVM VG was carved up into 5 or 6 LVs. The motherboard on this system died and I wound up buying a new one.
Now I want to recover the data from the RAID1 setup on the new server. However, when I attach the two 1TB drives to a new Fedora 13 setup, mdadm is only able to find one of the two drives. The partition on the second drive shows "busy" during an mdadm -A -s -v scan for md volumes.
Well, one drive should be enough since this is RAID1, right? Well, when I do a pvscan -v, the other drive shows up as a "NEW" pv not allocated to a VG. In addition, vgscan does print "Invalid metadata header checksum" when it runs but it doesn't point at any particular PV. I'm afraid to go any further with LVM since I can't afford to lose the data on this system. It is backed up offsite, but the restore will take several days and I can't afford to be down that long.
Are there any tools or techniques where I can dig deeper into what each drive, in the RAID1 pair, has right and wrong with it and pick one that I can force into a usable VG so that I can recover the data?
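A cautious sketch of the kind of inspection I'd do first (/dev/sdb1 and /dev/sdc1 stand in for the two members; --examine is read-only). The idea is to run the array degraded from whichever mirror looks healthier, then activate the VG:
Code:
# check both members' superblocks and event counts first
mdadm --examine /dev/sdb1 /dev/sdc1
# assemble degraded from the one good mirror
mdadm --assemble --run /dev/md0 /dev/sdb1
# look for the VG and activate it
pvscan
vgchange -ay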
View 2 Replies
Mar 2, 2011
I have a server that was running a hardware isw raid on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive and installed a new drive (same size). On reboot I went into the Intel raid setup; it did show the new drive, and I was able to set it to rebuild the raid. Continuing the reboot, everything came up just fine except the raid 1 on the system disk. I have tried many times to get the system to rebuild the raid using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed. At present, when I look at the system, it does not show up with a raid setup on the system disk (this comprises the entire 1TB disk with two partitions, sda1 as / and sda2 as swap). Problem: I have decided to forego the Intel raid and just use mdadm. I have a test system set up to duplicate (not the software, but the disk partitions) the server setup.
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
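A hedged outline of the usual single-disk-to-mdadm-raid1 migration, matching the partition layout above (sdb is the assumed second disk; every step is destructive to sdb, and the old isw metadata should be gone before md claims the disk):
Code:
# remove leftover Intel (isw) raid metadata from the new disk
dmraid -r -E /dev/sdb
# create a degraded raid1 using only the new disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
# after copying the system over and switching boot to md0,
# add the original disk as the second mirror
mdadm --add /dev/md0 /dev/sda1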
View 12 Replies
Sep 10, 2010
I have a 7-drive RAID array on my computer. Recently my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". These both return errors and will not complete on the new raid card.
Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.
[code]....
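One caution that may explain the spare count: running --add against a partially assembled, inactive array tends to mark perfectly good members as spares. A hedged sketch of the safer route (drive names are examples; substitute your seven members):
Code:
# stop the partial assembly so no member gets re-labelled
mdadm --stop /dev/md0
# check what each disk thinks it is: array UUID, event count, state
mdadm --examine /dev/sd[cdefghi]1 | egrep 'UUID|Events|State'
# force-assemble with the full explicit member list
mdadm --assemble --force /dev/md0 /dev/sd[cdefghi]1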
View 4 Replies
Dec 16, 2009
Got a little problem after the install of Fedora 12. At first there was no problem opening the raid device; after I tried to automount it with crypttab and fstab, I'm no longer able to open it.
Here are some outputs: [Code]...
View 2 Replies
Feb 2, 2010
Recently the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the raid. I turned the server off, took the failed drive out, and restarted. Of course the raid didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and back on with the drive there just to see if I could get the raid up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to trick the raid into working with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup. Also, how do I remove the bad drive from the array and replace it with a fresh drive, without data loss?
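A hedged sketch of the degraded-start-then-replace sequence (assuming sdc and sdd are the two good members and sdb is the failed one; --run lets a degraded raid5 start with n-1 drives):
Code:
# start the array degraded from the two healthy members
mdadm --assemble --run --force /dev/md0 /dev/sdc /dev/sdd
# once the warranty replacement arrives, add it and let md rebuild
mdadm --add /dev/md0 /dev/sdb
watch cat /proc/mdstat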
View 3 Replies
Jun 7, 2010
I just had a whole 2TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my raid 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is the OS picks up the disks, but it doesn't see the partitions.
I ran smartctl -l on all the drives in question and they're all in good working order.
Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any possible errors that might be present?
Here is what the "good" drive looks like when I use sfdisk:
Quote:
sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 121600 121601- 976760001 83 Linux
/dev/sda2 0 - 0 0 0 Empty
[Code]....
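If only the partition tables were lost, and the members were all created with the identical layout shown above, the table can be copied across. A hedged, destructive sketch; triple-check which device is source and which is destination before running it:
Code:
# dump the good drive's partition table and write it to the blank one
sfdisk -d /dev/sda | sfdisk /dev/sdb
# re-read and confirm the result
sfdisk -l /dev/sdb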
View 2 Replies
Jun 18, 2010
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 -
the raid drive for the above three disks. The sda1 disk has failed and the array is running on 2 of 3 disks.
/dev/sdc (OS disk)
/dev/sde (new 2tb disk - unused)
/dev/sdf (new 2tb disk - unused)
My plan was to rebuild the array using the two new disks as RAID1. Would the best way to do this be to create a new RAID1 disk on /dev/md1 and then copy all data over from /dev/md0? Also, this may sound stupid, but since all 3 drives in md0 are identical I'm not sure physically which disk is bad. I tried disconnecting each disk one by one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure of how to remove it properly.
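Two sketches that may help: identifying the physical disk by the serial number printed on its label, and the remove step for the already-failed member (device names follow the list above; the new disks are assumed to be partitioned first):
Code:
# match /dev/sda etc. to a physical drive via its serial number
smartctl -i /dev/sda | grep -i serial
# a failed member must be failed AND removed before pulling it
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# new RAID1 on the two fresh disks, then copy the data across
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mkfs.ext3 /dev/md1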
View 3 Replies
Oct 6, 2010
Can I use UUIDs to set up a raid with mdadm?
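Yes; both assembly and the config file accept UUIDs, which is exactly what makes arrays immune to device-name shuffling. A small sketch (the UUID is a placeholder):
Code:
# assemble by array UUID, regardless of current device names
mdadm --assemble --scan --uuid=<array-uuid>
# or record it permanently in mdadm.conf:
# ARRAY /dev/md0 UUID=<array-uuid>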
View 3 Replies
Feb 1, 2011
My home-backup server with 8*2TB disks won't boot anymore. Two disks failed at the same time, and I rebuilt the raid 6 array without any problem, but now I can't boot the OS. I'm using Ubuntu Server 10.10. I've taken screenshots of the displays rather than copying everything here. The problem at boot:
And the grub config: It's not a production server, but I would like to have it online. I've tried for the last 2 days (just a couple of hours a day) but without success. I was advised to do "mount -o remount,rw /" and then edit /etc/fstab, but I get a "file doesn't exist" error.
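The remount error suggests the shell isn't actually sitting on the installed root. A hedged sketch of doing the repair from a live CD instead (assuming /dev/md0 is the root array):
Code:
# from a live CD: assemble, mount the real root, and chroot into it
mdadm --assemble --scan
mount /dev/md0 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
# now /etc/fstab is the real one and grub can be regenerated
update-grub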
View 2 Replies
Mar 25, 2011
I'm having trouble with Ubuntu 10.10 and stable device names. When I installed Ubuntu, the root drive was the only one in the machine; it obviously got /dev/sda.
After the base installation, I installed three additional 2TB drives to make a RAID-5 array. Ubuntu renamed the root drive to /dev/sdd. While annoying, I lived with it.
After creating a single partition set to "Linux raid autodetect" on each drive, I created the RAID-5 array:
Code:
All was going well until a reboot. When rebooting, Ubuntu decided to make the root drive /dev/sda this time, and now mdadm --detail /dev/md0 reports:
Code:
How do I fix the array and make the device names stable?
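The short answer to the stability question: md device names shouldn't depend on sd* ordering at all once the array is recorded by UUID. A sketch (Ubuntu config path assumed):
Code:
# print the ARRAY line carrying the array's UUID, e.g.:
# ARRAY /dev/md0 metadata=0.90 UUID=xxxxxxxx:...
mdadm --detail --scan
# append it to the config and refresh the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u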
View 1 Replies
Apr 7, 2011
I am trying to create a Raid 1 ram disk. Below are the commands I used:
[root@abidbodal dev]# mke2fs -m 0 /dev/ram8
[root@abidbodal dev]# mount /dev/ram8 /mnt/rd8
[root@abidbodal dev]# mke2fs -m 0 /dev/ram9
[code]....
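One thing that stands out in the transcript: the filesystems are being created on the ram devices themselves. For a raid1, the md device has to be created from the members first, and the filesystem goes on the md device. A minimal sketch (md9 is an arbitrary name):
Code:
# build the mirror from the two ram disks first
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/ram8 /dev/ram9
# then put the filesystem on the array, not on the members
mke2fs -m 0 /dev/md9
mkdir -p /mnt/rd
mount /dev/md9 /mnt/rd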
View 3 Replies
Dec 21, 2010
I have been having some odd issues over the last day or so while trying to get a raid 5 array running in software under Kubuntu. I installed 3 1TB drives and started up; my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to raid auto-detect and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done, and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.
By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.
Device List(original):
/dev/sda => boot drive, /home /
/dev/sdb => 1.5TB media storage, failing
[code]...
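On the "busy" spare: a common culprit is the kernel's raid autodetect grabbing the partition into a stray array at boot (often visible as an extra md entry in /proc/mdstat). A hedged sketch of freeing and re-adding it (md127 and sdd1 are assumed names):
Code:
# see whether a stray array is holding the disk
cat /proc/mdstat
# stop any stray array, wipe the stale superblock, then re-add
mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sdd1
mdadm --add /dev/md0 /dev/sdd1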
View 1 Replies
Dec 4, 2010
I have a Linksys NSLU2 with four USB hard drives attached to it. One is for the OS; the other three are set up as a RAID5 array. Yes, I know the raid will be slow, but the server is only for storage and will realistically get accessed once or twice a week at most. I want the drives to spin down, but mdadm is doing something in the background to access them. An lsof on the raid device returns nothing at all. The drives are blinking non-stop and never spin down until I stop the raid. Then they all spin down nicely after the appropriate time.
They are Western Digital My Book Essentials and will spin down by themselves if there is no access. What can I shut down in mdadm to get it to stop continually accessing the drives? Is it the sync mechanism in the software raid that is doing this? I tried setting the monitor to --scan -1 to get it to check the device just once, but to no avail. I even went back and formatted the raid with ext2, thinking maybe the journaling had something to do with it. There are no files on the raid device; it's empty.
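One thing worth checking: a write-intent bitmap stored in the superblocks causes periodic writes to every member even on an idle array. A hedged sketch (only applicable if a bitmap is actually present):
Code:
# see whether the array carries an internal bitmap
mdadm --detail /dev/md0 | grep -i bitmap
# if so, removing it stops the periodic bitmap updates
mdadm --grow /dev/md0 --bitmap=none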
View 12 Replies
Nov 16, 2009
I run mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1 and I get: md1: raid array is not clean -- starting background reconstruction. Why is it not clean? Should I be worried? The HD is not new; it has been used before in a raid array but has been repartitioned.
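As far as I understand it, that message is expected: a newly created array starts out "not clean" until the initial sync completes, and with one member listed as missing there is nothing to reconstruct against, so it is harmless. If the message bothers you, --assume-clean suppresses the initial sync (a sketch):
Code:
# mark the new array clean at creation; fine for a fresh raid1
# whose contents will be written before they are ever read
mdadm --create /dev/md1 --level=1 --raid-devices=2 --assume-clean missing /dev/sdb1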
View 2 Replies
Feb 3, 2011
When we assemble a raid array, where does it load the configuration information for that array from? I thought it referred to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.
# cat /etc/mdadm.conf
DEVICE /dev/sd[bcdjkl]1
DEVICE /dev/loop[012345]
[code]...
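The missing piece may be that the authoritative metadata lives in a superblock on each member device: level, array UUID, member count, and each disk's role. mdadm.conf mainly tells mdadm where to look and which arrays to accept; the rest is read back from the disks, which you can see directly:
Code:
# print the superblock mdadm reads from a member at assembly time
mdadm --examine /dev/sdb1
# condensed per-array view, as it would appear in mdadm.conf
mdadm --examine --scan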
View 2 Replies
Jul 27, 2011
I have a raid 5 array that appears to have died. I was just routinely looking at /var/log/messages and noticed that a drive in the array was complaining (via smartd).
This is the /home directory, and so is backed up, so it's not critical, but I'd like to get some things that changed after the last backup (the week before I noticed the failure).
Let me start by outlining what I know:
It's a 2TB array spread over three disks (mdadm software RAID5); here are the drives:
mdadm reports the drives as:
Now, the array *WAS* up OK, but I unmounted it (/dev/md0 was mounted at /home). Yes, I know. I didn't want any changes being made to the array by anything; at least that was my thinking at the time. In hindsight... I would have killed any processes, locked out the server, backed up again and 'then' unmounted it.
But we are where we are, I'm sure there'll be time for recriminations later.
When I try to remount it, I get :
OK, looks like it's lost the filesystem type. It has normally just worked; maybe we'll give it a little hint: it's ext3 with a journal.
When I tell it it's an ext3, I get :
Now, before I go charging off specifying superblocks further along the disk: I can't remember where they're stored.
Neither can I recall what blocksize I originally created the array with (I have a feeling I specified 4K, but I could be wrong).
debugfs is only telling me :
I should also point out that this server is hosted, so it's 150 miles away from me at the moment, so I can't just whip them out and dd a copy.
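On the backup-superblock question: mke2fs with -n prints where the superblocks would be placed without writing anything, provided the same blocksize is given as at creation time. A hedged sketch (4K blocksize and the 32768 backup location are assumptions to verify against the -n output):
Code:
# -n = dry run: prints backup superblock locations, writes nothing
mke2fs -n -b 4096 /dev/md0
# then try e2fsck against one of the printed backups
e2fsck -b 32768 -B 4096 /dev/md0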
View 11 Replies
Jul 15, 2010
I've been having troubles with software raid. In particular, the raid array becomes un-assemblable after reboots. The config is CentOS 5, 4 SATA disks (one 160GB containing the OS, no raid, and three 2TB disks configured as a RAID 5 array, no spare drive). These drives were configured in anaconda and all seemed to go well (the drive and its LVM partitions worked, and it finished rebuilding overnight). A couple of reboots later, the drives cannot be assembled anymore and the machine won't boot. The error message says:
mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.
Of course there are 3 drives and no spares in the array as configured. Manually starting the array with mdadm --assemble --scan gives the same message, as does assembling the drive by specifying the individual parts. /proc/mdstat does recognize the 3 drives, and when I look at the partition tables in fdisk, they show as being software raid. What could be wrong, and what are the steps to diagnose it? I tried configuring the raid drives manually before going the anaconda route. Also, does anyone know how I can edit the /etc/fstab file to disable them so the machine will at least boot? The (Repair filesystem) shell has the / drive mounted read-only.
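Two hedged fragments that may help here: comparing what each member's superblock claims (to see why two of them are being treated as spare or missing), and the fstab change to get the machine booting; from the repair shell, the root must be remounted read-write first (member names are assumptions):
Code:
# compare role and event count on each member
mdadm --examine /dev/sd[bcd]1 | egrep 'UUID|Events|State'
# from the (Repair filesystem) shell: make / writable, then
# add 'noauto' to the raid/LVM lines in /etc/fstab
mount -o remount,rw /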
View 7 Replies
Mar 13, 2010
I just expanded my raid 5 array from 3*2TB to 4*2TB; mdadm performed the grow successfully and shows an md0 device with 6TB of usable space. Now my problem is that Debian (Lenny) doesn't show the right amount. See below.
######### MDADM DETAILS OF ARRAY ##########
> mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Mon Dec 14 22:30:46 2009
Raid Level : raid5
[Code].....
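Most likely the array grew but whatever sits on top of it didn't; the filesystem (or PV) has to be resized separately. A hedged sketch, one case depending on whether md0 holds a filesystem directly or an LVM PV (vg0/data is a placeholder name):
Code:
# if /dev/md0 holds an ext3 filesystem directly:
resize2fs /dev/md0
# if /dev/md0 is an LVM physical volume instead:
pvresize /dev/md0
# then extend the LV and its filesystem as needed, e.g.:
# lvextend -l +100%FREE /dev/vg0/data && resize2fs /dev/vg0/data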
View 3 Replies
Oct 13, 2010
I received some errors while running a benchmarking script to determine my ideal raid chunk size. There are several errors in the kernel log regarding the SATA link, and eventually the two drives I have connected to a PCI Express x1 SATA card were no longer present in /dev/.
The script I was using is available here: [URL]...
System specs:
1 500GB Western Digital drive (system drive)
3 2TB Samsung F4 drives (2 connected to the PCI x1 card (SATA II) and 1 on an onboard SATA port (SATA I))
single-core AMD64 on an SiS chipset, Debian 64-bit (testing)
[Code]...
I rebooted the machine and everything appears to be happy. What do these errors mean? What steps should I take to prevent them in the future so they don't end up corrupting the array?
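Those link-level resets usually point at cabling, power, or the controller rather than the disks themselves. A hedged sketch of the checks I'd run, plus a periodic scrub so latent errors surface before they cost redundancy (sdb and md0 are assumed names):
Code:
# this counter climbs when the cable/controller corrupts transfers
smartctl -a /dev/sdb | grep -i udma_crc
# periodic scrub: md re-reads everything and repairs from parity
echo check > /sys/block/md0/md/sync_action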
View 1 Replies
Apr 22, 2011
I'm pretty new to FakeRAID vs. JBOD in a software RAID, and could use some help/advice. I recently installed 11.4 on a brand new server system that I pieced together, using Intel RAID ICH10R on an ASUS P8P67 Evo board with a 2500K Sandy Bridge CPU. I have two RAIDs set up: one RAID-1 mirror for the system drive and /home, and the other consists of four drives in RAID-5 for a /data mount.
Installing 11.4 seemed a bit problematic. I ran into this problem: [URL]... I magically got around it by installing from the live KDE version with all updates downloaded before the install. When prompted, I specified that I would like to use mdadm (it asked me); however, it proceeded to set up a dmraid. I suspect this is because I have fake raid enabled via the BIOS. Am I correct in this? Or should I still be able to use mdadm with a BIOS raid setup?
Anyway, to make a long story short, I now have the server mostly running with dmraid installed instead of mdadm. I have read many stories online that seem to indicate that dmraid is unreliable compared to mdadm, especially when used with newer SATA drives like the ones I happen to be using. Is it worth re-installing the OS with the drives in JBOD and then having mdadm configure a Linux software raid? Are there massive implications one way or another if I do or do not install mdadm or keep dmraid?
Finally, what could I use to monitor the health and status of a dmraid? mdadm seems to have its own monitoring associated with it, from what I saw glancing over the man pages.
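On the monitoring question: dmraid has no monitor daemon comparable to mdadm --monitor, but it can report raid-set status on demand, which is easy to wrap in a cron job (a sketch; the grep pattern is an assumption about the output format, so verify it against your own dmraid -s output first):
Code:
# show status of all discovered raid sets
dmraid -s
# crude cron check: alert if any set is not reported ok
dmraid -s | grep -qi 'status *: *ok' || echo "dmraid set degraded" | mail -s raid root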
View 9 Replies