General :: Mount Hard RAID5 Array With Software RAID0 And GPT?
Aug 8, 2009
I couldn't post in General. It said I had insufficient permissions to post there, so, this post does have to do with Windows slightly. Sorry that it's here, but I DID read the rules (I searched, and couldn't find an answer to my problem either)
Anyway, I have a 2.72TB RAID5 array (4x1TB drives) which I used in my Windows installation. I initialized it as GPT and used "span" to combine the 2TB partition and the 720GB partition into one volume, so I believe Windows created a software RAID0 on top of the hardware array. Ok, so now I've made the leap away from Windows and am going 100% into Linux (Debian, to be exact), and I'm trying to figure out how to mount this array. I've only done basic web/ftp/ircd server management on Linux before, and never anything with mounting drives. I'm a complete n00b at this stuff.
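A rough starting point, assuming the hardware array shows up in Debian as a single block device (the /dev/sdb name below is only an example) would be to identify it first and then try mounting the NTFS volume. One caveat: if the Windows "span" created a Dynamic Disk (LDM) volume rather than a plain partition, a simple partition mount may not work; the kernel's "Windows Logical Disk Manager (Dynamic Disk) support" option, or LDM-aware tooling, may be needed for that case.
Code:
sudo parted -l                    # shows the GPT partition layout on every disk
sudo fdisk -l                     # older fdisk may only show a protective GPT entry
sudo apt-get install ntfs-3g
sudo mkdir -p /mnt/array
sudo mount -t ntfs-3g /dev/sdb1 /mnt/array   # adjust the device to match the parted output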
View 9 Replies
Dec 23, 2010
I have a RAID 5 array, md0, with three full-disk (non-partitioned) members: sdb, sdc, and sdd. My computer hangs during the AHCI BIOS screen if AHCI is enabled instead of IDE while these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).
Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
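A sketch of the usual approach, assuming the array can run degraded while each member is converted in turn (device names follow the post; double-check them before failing anything):
Code:
# repeat for sdb, sdc, sdd, one at a time, waiting for each rebuild to finish
sudo mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
sudo fdisk /dev/sdb          # create one full-size primary partition, type fd (Linux raid autodetect)
sudo mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat             # wait for the resync to complete before touching the next disk
One caveat: the new partition is slightly smaller than the whole disk, so this only works if the existing members leave a little unused space; otherwise mdadm will reject the partitioned member as too small.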
View 1 Replies
View Related
Jul 17, 2009
I have a 9x320G RAID5 array that I am migrating over to a 3x1.5T RAID5 array. Intermittently, a drive would drop out of the older array and it would automatically start rebuilding. I thought it was a bad cable or controller somewhere, so when I bought the three new drives, I bought a new controller for them all, too. I'm running both arrays side by side until I'm happy the new hardware is stable (one drive was DOA). Then I noticed one morning that both arrays were rebuilding themselves. This was in /var/log/messages:
Quote: Jul 5 00:30:19 mnemosyne -- MARK --
Jul 5 00:50:19 mnemosyne -- MARK --
Jul 5 01:06:02 mnemosyne kernel: md: syncing RAID array md0
[code]....
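One thing worth checking, offered as a guess: on Debian/Ubuntu-style systems the mdadm package ships a cron job that runs checkarray on the first Sunday of each month, and Jul 5 2009 was a first Sunday, so both arrays "rebuilding" at once may just be the scheduled redundancy check rather than a real rebuild. The sync_action file distinguishes the two:
Code:
cat /proc/mdstat
cat /sys/block/md0/md/sync_action     # "check" = scheduled scrub, "resync"/"recover" = real rebuild
grep -i checkarray /etc/cron.d/mdadm  # shows the monthly schedule, if present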
View 4 Replies
View Related
Aug 10, 2010
I need to restore a superblock on a software RAID5 array, but I'm not sure if I'm meant to restore it on md0 or on a member device such as sda1. From what I've read, superblocks are stored on each drive, but I'm not sure if this changes when software RAID is in use.
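For what it's worth, with Linux software RAID (mdadm) each member device carries its own md superblock, while the filesystem superblock lives inside the assembled md0 device, so which one to restore depends on which kind is damaged. Both can be inspected non-destructively (device names below are only examples):
Code:
sudo mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1   # per-member md superblocks
sudo mdadm --detail /dev/md0                         # state of the assembled array
sudo dumpe2fs -h /dev/md0 | head                     # filesystem superblock, if it is ext2/3/4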
View 1 Replies
View Related
Jun 7, 2011
I recently upgraded a server from Fedora 6 to Fedora 14. In addition to the main hard drive where the OS is installed, I have three 1TB hard drives configured for RAID5 (via software). After the upgrade, I noticed one of the hard drives had been removed from the raid array. I tried to add it back with mdadm --add, but it just put it in as a spare. I figured I'd get back to it later. Then, when performing a reboot, the system could not mount the raid array at all. I removed it from the fstab so I could boot the system, and now I'm trying to get the raid array back up.
I ran the following:
Code:
mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1
I know my chunk size is 64k, and "missing" is for the drive that got kicked out of the array (/dev/sdb1). That seemed to work, and mdadm reports that the array is running "clean, degraded" with the missing drive. However, I can't mount the raid array. When I try:
Code:
mount -t ext3 /dev/md0 /mnt/foo
I get:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
[code]....
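A couple of hedged suggestions for this situation: when an array is recreated with --create, the device order and the superblock metadata version have to match the original exactly, or the filesystem on top will look corrupt. An array originally created under Fedora 6 would most likely have used 0.90 superblocks, whereas newer mdadm defaults to 1.x metadata, which places data at a different offset, so recreating with an explicit --metadata=0.90 (and, if that fails, the other device ordering) may be worth a try; pointing e2fsck at a backup superblock can also help. This is a guess, not a guaranteed recovery path:
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=5 --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1
sudo fsck.ext3 -n /dev/md0        # -n: read-only check first
sudo mke2fs -n /dev/md0           # dry run; only lists where backup superblocks would live, writes nothing
sudo e2fsck -n -b 32768 /dev/md0  # 32768 is a typical backup superblock location for 4k-block ext3
If that ordering still fails, stopping the array and recreating it with /dev/sdd1 and /dev/sdc1 swapped is the other permutation to test.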
View 1 Replies
View Related
Feb 20, 2011
Just got 3 extra drives for my machine.
2 x 160GB SATA
1 x 165GB SATA
How do I create a raid0 array of these drives? On each drive I have a partition whose size is 160GB, formatted to type fd (RAID autodetect). I have tried the following:
Code:
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sda1
mkfs.ext4 /dev/md0
mount /dev/md0 /media/raid
but for some reason it doesn't work.
[Code]...
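A hedged first step would be to find out which of the three commands actually fails, since each can fail for a different reason (and note that /dev/sda1 is often the system disk, which is worth double-checking before including it in an array):
Code:
cat /proc/mdstat               # did the array get created and started?
sudo mdadm --detail /dev/md0   # member states, sizes, any failed devices
dmesg | tail -n 30             # kernel messages from mdadm, mkfs.ext4 or mount
mount | grep sda1              # make sure sda1 is not already mounted or in use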
View 5 Replies
View Related
Dec 1, 2010
When I set up Ubuntu 10.10 I had only one hdd around, so I installed my system with the idea that I would add the 2nd hdd for raid1 later on. Last weekend I wanted to add the hdd, but discovered that Ubuntu had created a raid0 array. So I went on and tried different things: removing the 1st hdd from the raid0 array, creating a raid1 with two disks, and so on... I finally could synchronize both disks, but after a reboot the raid0 array appeared again with only one disk. Now I know I should have written the mdadm.conf and fstab files... My last tries resulted in a missing superblock. Here is the story:
[Code].....
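Before trying anything else, it is probably worth checking what md superblocks (if any) are still present on each disk; the device names below are assumptions:
Code:
sudo mdadm --examine /dev/sda1 /dev/sdb1   # per-member superblocks, or "No md superblock detected"
sudo mdadm --detail --scan                 # what mdadm can currently see as assembled arrays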
View 1 Replies
View Related
Feb 1, 2011
I currently have two hard drives, with my root partition configured in RAID0. I'd like to add two additional hard drives and include them in my RAID0 array. I need to recreate the array to do this, so I'd like to copy everything off of the existing array, add the drives, build the new array, and copy everything back. I have an external hard drive with four times the capacity of my current array. What would be the best way to copy this data so that nothing is missed and I can just copy everything back and boot back up?
- dd: image the entire root partition, mount it after creating the new array, copy everything back (at the filesystem level)
- dd: image the entire root partition, write it back out to the new array - I'm not sure how this would work, because the partition will be the wrong size; I don't know much about dd
- rsync: just rsync everything on root over to the external
- something else?
I plan on booting to a live CD and mounting my current array there, so I won't be working on a live filesystem.
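A minimal sketch of the rsync route, done from the live CD with the old array mounted at /mnt/old and the external drive at /mnt/backup (mount points and device names here are assumptions); the same command with source and destination swapped copies everything back onto the new array:
Code:
sudo mount /dev/md0 /mnt/old
sudo mount /dev/sde1 /mnt/backup
sudo rsync -aAXH --numeric-ids /mnt/old/ /mnt/backup/rootfs/
# ...recreate the larger array, mkfs, mount it at /mnt/new, then:
sudo rsync -aAXH --numeric-ids /mnt/backup/rootfs/ /mnt/new/
After copying back, the array geometry and filesystem UUID will have changed, so /etc/fstab, /etc/mdadm/mdadm.conf and the bootloader would likely need updating from a chroot before rebooting.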
View 1 Replies
View Related
May 24, 2011
I'm looking to shrink my Windows partition on a raid0 array and create an mdadm Ubuntu partition using raid0. Is this possible? Can I just ignore the /dev/mapper device and use the standard /dev/sdx devices?
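A guess at what matters here: if the existing raid0 is BIOS "fakeraid" handled by dmraid, the striped Windows volume only exists as the /dev/mapper device, and writing to the underlying /dev/sdx devices directly would corrupt it; a separate mdadm array can still be built on new partitions, but resizing the Windows half has to go through the mapper device. Checking which case applies is quick:
Code:
sudo dmraid -r       # lists disks that belong to a BIOS fakeraid set, if any
ls -l /dev/mapper/   # the striped volume and its partitions appear here when dmraid is active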
View 4 Replies
View Related
Mar 24, 2009
I am currently running Ubuntu 8.10 amd64 on a single 750GB drive. My question is whether or not I can add two more drives (say, two 500GB drives) in a raid 0 array, leaving the OS and data on the original drive. I find myself working with some very large files and I would like to speed up their processing time. At the same time, I would like to leave my current configuration in place without touching the OS or my data. I'm not much of a hardware guy.
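This should be possible, since an mdadm array doesn't have to involve the system disk at all. A rough sketch, assuming the two new drives appear as /dev/sdb and /dev/sdc (check with fdisk -l first) and each gets one full-size partition of type fd:
Code:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext3 /dev/md0
sudo mkdir -p /mnt/fast
sudo mount /dev/md0 /mnt/fast
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf    # so the array assembles at boot
echo '/dev/md0  /mnt/fast  ext3  defaults  0  2' | sudo tee -a /etc/fstab
Keep in mind RAID0 has no redundancy: losing either new drive loses everything stored on the array.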
View 4 Replies
View Related
Jun 10, 2011
I am trying to build a new array after adjusting TLER on my disks, which permanently changed some of the drive sizes. I am not sure if the following inconsistencies are related to the newly mismatched drive sizes.
Using:
Code:
mdadm --create --auto=md --verbose --chunk=64 --level=5 --raid-devices=4 /dev/md1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
Nets me (build-time was two full days):
[Code]....
On a side note, since I'm recreating my array from scratch, I was wondering if anyone here knows of any optimized settings I could use. I've got 3TB of data to transfer, so lots of test material.
These are Western Digital first-generation 2TB Green drives (WD20EADS-00R6B0) with the WDidle3 fix applied and TLER=ON. These are pre-Advanced Format (i.e., not 4K-sector) drives.
Code:
mkfs.ext4 -E stripe-width=48,stride=16 /dev/md1
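For reference, those mkfs numbers appear to follow the usual formula: stride = chunk size / filesystem block size = 64KiB / 4KiB = 16, and stripe-width = stride x data disks = 16 x 3 = 48 for a 4-disk RAID5. Beyond that, one commonly tuned knob during a big initial copy is the md stripe cache (the md1 name and the value below are only examples):
Code:
echo 8192 | sudo tee /sys/block/md1/md/stripe_cache_size   # larger stripe cache for RAID5/6, costs RAM
cat /proc/sys/dev/raid/speed_limit_min                     # raise this if the initial sync is being throttled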
View 9 Replies
View Related
Jun 15, 2011
I'm a bit at a loss on this one. I couldn't get a drive from a former RAID5 array to format. I did a dd to write zeros to the drive and attempted to fsck, only to be stopped every time with the error: Couldn't find ext2 superblock, trying backup blocks.. fsck.ext3: Bad magic number in super-block while trying to open /dev/sda1
Smartctl shows no problems with the drive (a Seagate 750GB), but I haven't removed it and thrown it in a Windows machine to run Seagate's proprietary drive diagnostics yet. Running CentOS 5.6. I've never had this problem before. The drive is not mounted and the old md device has been removed as far as I can tell. It could still be attempting to assemble the RAID5 with the 1 drive, but I didn't see it attempt to do so.
[Code]...
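One hedged observation: fsck only checks an existing filesystem, it doesn't create one, so after zeroing a drive there is nothing for it to find and the "bad magic number" error is expected. The usual sequence for reusing a former RAID member is to wipe any remaining md superblock and then make a new filesystem (sda1 is taken from the post; verify it first):
Code:
cat /proc/mdstat                        # make sure no leftover array is still holding the disk
sudo mdadm --zero-superblock /dev/sda1  # remove old RAID metadata, if any survives
sudo mkfs.ext3 /dev/sda1                # create the filesystem; fsck it afterwards if desired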
View 3 Replies
View Related
Oct 29, 2010
I've had software RAID 5 arrays for a while now, so they were set up before a RAID array could be partitioned. I had two separate RAID 5 arrays on the same set of drives. One was for / and the other for /home. I moved the / to an SSD and figured I'd expand the other RAID array by failing a drive, repartitioning it then adding it back in. After repeating for the remaining drives, I could then expand the RAID array to use the full size of the drives.
Partway through the second drive being added back in, the RAID array stopped with a kernel error. The drive I was adding and another drive both showed as failed. I couldn't restart the array so I copied the failed drive (Seagate's SeaTools did show it as faulty, but without SMART being tripped) to a new one and tried again. dd_rescue reported the drive copied correctly but I still couldn't restart the array.
So I tried the old standby of recreating the array. This allows me to start it but the ext3 file system won't mount. So I then tried my script (listed in another thread) to try every combination of drives to assemble the array and mount the file system. Still no luck.
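One avenue that may not have been tried, as a hedged suggestion: when two members drop out of a RAID5 but the data is probably intact, mdadm --assemble --force will accept a member with a slightly stale event count instead of refusing to start, which is usually less risky than recreating the array. Comparing the event counters first shows how far out of date each drive is (device names are placeholders):
Code:
sudo mdadm --examine /dev/sd[abcd]1 | grep -E 'Events|Update Time|State'
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1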
View 2 Replies
View Related
Mar 19, 2011
Yesterday I created a raid5 array /dev/md0 consisting of 5 hard disks, named sda thru sde at the time of creation. After that I stored some data on the array without any difficulties, then shut down the computer. Early this morning when starting the computer I got a message that /dev/md0 was not ready to be mounted. So I checked the raid array and discovered that the enumerator had been messing with the hard disks.
Hard disk sda was now sdc, etc. After I rebooted, the hard disks got their original names again: sda was sda again. When I mounted the array, no problems occurred. So it seems that the order in which the hard disks are enumerated influences the availability of the raid array. Is there a way to avoid this kind of problem with a raid array?
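The usual way around this is to stop identifying anything by its sdX name and rely on UUIDs instead: mdadm records a UUID in each member's superblock and can assemble the array no matter how the kernel enumerates the disks on a given boot. A sketch (paths assume a Debian/Ubuntu-style layout, and the UUID shown is a placeholder):
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # adds an ARRAY line keyed on the array UUID
sudo blkid /dev/md0                                              # filesystem UUID for fstab
# in /etc/fstab, mount by UUID rather than device name, e.g.:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext3  defaults  0  2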
View 4 Replies
View Related
Mar 6, 2011
I wanted to extend my raid array with one disk, but I made a major error: I forgot to partition the new disk to utilize the full 640GB. I used the following commands to extend the array:
Code:
mdadm --add /dev/md0 /dev/sdf
mdadm --grow --raid-devices=6 /dev/md0
xfs_growfs /dev/md0
After noticing that something was wrong I used these commands to remove the new disk:
[Code]....
How can I repair this situation? Before starting this adventure I made a back-up of everything that was stored in the raid array.
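A hedged note on the "error" itself: md only uses as much of each member as the smallest existing member allows, so giving it the whole /dev/sdf rather than a partition does not actually waste the 640GB; the whole disk offers slightly more room than a partition would. If that is acceptable, the simplest repair may be to put the disk back and let the degraded array rebuild, rather than trying to shrink back to 5 devices:
Code:
sudo mdadm --detail /dev/md0        # check whether the array is mid-reshape or just degraded
sudo mdadm --add /dev/md0 /dev/sdf  # re-add the removed disk
watch cat /proc/mdstat              # monitor the rebuild
Shrinking back to 5 devices is much harder here, because the filesystem was already grown with xfs_growfs and XFS cannot be shrunk, so that route would effectively mean restoring from the backup onto a recreated array anyway.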
View 1 Replies
View Related
May 30, 2011
I am running lucid and have a 4+1(spare) RAID5 array made up of 1TB disks. I upgraded my mdadm to version 3.1.4 and then performed the following operation:
$ sudo mdadm --grow /dev/md3 --level=6 --raid-devices=5 --backup-file=/var/lib/mysql/md3backup
I have a 500GB drive mounted at /var/lib/mysql which is mostly empty and not part of any RAID array. The reshaping started and everything looked OK. The access lights on the 5 drives were all coming on at the same time on a regular basis. The status from /proc/mdstat showed the array being reshaped to RAID6, albeit slowly. The status showed an average speed of 4000KB/s and an estimated completion time of 4000 minutes. This all seemed reasonable. This was performed in late afternoon.
The next morning I checked the status and the average speed was down to 300-400KB/s and the estimated time to complete was 40,000 minutes. When I look at the drive lights, I have one drive whose access light is on solid while the other four drives come on intermittently. Running iotop doesn't show anything useful; mdadm and kjournal show up occasionally. The same is true for top (running on an Intel i5 2500K processor). Here is the output of cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sde3[4](S) sda3[3] sdc3[1] sdd3[2] sdb3[0]
987904 blocks [4/4] [UUUU]
[code]....
My biggest concern is keeping this system running for 20+ days without any hiccups.
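Two hedged things to look at: the md throttle settings and the drive with the solid light. Reshape speed is bounded by /proc/sys/dev/raid/speed_limit_min and speed_limit_max and helped by a larger stripe cache, while a single drive that stays busy while the others idle can indicate a disk internally retrying bad sectors, which would also explain the slowdown; smartctl is worth a look there. For example:
Code:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min    # don't let the reshape idle down
echo 8192  | sudo tee /sys/block/md3/md/stripe_cache_size   # bigger stripe cache, uses more RAM
sudo smartctl -a /dev/sdX | grep -iE 'reallocated|pending'  # sdX = the drive with the solid light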
View 1 Replies
View Related
Jun 26, 2009
I have looked through the forums and I am not sure whether the LSI 8204ELP works with CentOS 5.3, 5.2, or 5.1. Can anyone who has had a positive experience with this hardware combination give some feedback? The mobo is a Supermicro C2SBX. [URL]
View 2 Replies
View Related
Sep 12, 2009
Ok, as the title indicates I have a RAID5 array with 4 500GB SATA drives. This is the only drive configuration on the system (i.e. the OS also resides in the RAID array). I'm running CentOS 5 and need to know how to go about increasing the space in the RAID array by replacing the drives with 4 1TB drives.
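Assuming this is Linux software RAID (mdadm) rather than a hardware controller, the usual pattern is to replace one drive at a time, letting the array rebuild after each swap, and only grow at the end. A sketch (device and partition names are placeholders, and a full backup first is strongly advisable since the OS lives on this array):
Code:
# for each of the four drives in turn:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
#   power down, swap in the 1TB drive, partition it at the new full size (type fd), then:
sudo mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat                       # wait for the rebuild to finish before the next drive
# once all four drives are replaced:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0                # grow the ext3 filesystem; this can usually be done online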
View 3 Replies
View Related
Jun 7, 2010
I'm a bit sick to my stomach right now. I had a raid5 array (5x1.5TB drives) and I upgraded to lucid and now the array no longer works. Initially, on boot, it would try to mount it from fstab and that failed consistently as it wasn't assembling it.
then I tried to assemble it by hand (--scan) and that seemed to cause it to mount degraded (it seems md in the process tried to use one of the disks for something else!), but when I look at its partition table, it's empty. Pretty pissed at the moment (somewhat at myself, didn't really need to upgrade). Any ideas what went wrong?
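One hedged observation: if the array was originally built on whole disks rather than partitions, an empty partition table is exactly what you would expect to see, so that by itself may not mean data loss. Checking for md superblocks directly is more informative (device names are examples):
Code:
sudo mdadm --examine /dev/sd[bcdef]    # whole-disk members
sudo mdadm --examine /dev/sd[bcdef]1   # or partition members, whichever applies
cat /proc/mdstat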
View 2 Replies
View Related
Jul 27, 2010
After a failed upgrade from 9.10 to 10.04 I had to format my computer and do a clean install of 10.04, and now my mdadm raid5 array won't start. My array is called "The Library", and I believe the space between "The" and "Library" is causing the command Disk Utility uses to start the array to fail. The exact error is: An error occurred while performing an operation on "The Library" (RAID-5 Array): The operation failed
Error assembling array: mdadm exited with exit code 1: mdadm: unrecognised word on ARRAY line: Library
mdadm: unrecognised word on ARRAY line: Library
[code]....
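If the space in the name really is the problem, one workaround is to stop referring to the array by name at all and key the ARRAY line on its UUID instead. A hedged sketch (the UUID below is a placeholder, taken from the mdadm --examine output):
Code:
sudo mdadm --examine --scan        # note the UUID= value it prints
sudo nano /etc/mdadm/mdadm.conf
#   replace the broken line with one like:
#   ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
sudo mdadm --assemble /dev/md0
sudo update-initramfs -u           # so the same config is used at boot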
View 1 Replies
View Related
Sep 16, 2010
This isn't exactly Ubuntu specific, but I do plan on using Ubuntu to try to recover this array. I've been using a Freedom9 freestor 4020 for the past few years and, other than it totally blowing up last week, it's been pretty good. I was on vacation for almost a month, and a few days after I returned my NAS (freestor 4020) started acting up. I tried a few power cycles, but was dismayed to see that I could not log in via browser or SSH (SMB shares were not accessible either). A drive failure light is supposed to illuminate if a disk fails, but no dice.
I plugged all 4 drives from the NAS into an Ubuntu 9.04 Desktop system and one started throwing out all kinds of errors. Thinking that it would be a simple rebuild, I went to my local computer shop and picked up another 500GB drive (same manufacturer/part #), replaced the failed drive, and powered up the NAS again... Nothing. I left it for 12 hours then powered it down, plugged the new drive into my Linux box again to see if it rebuilt... the drive was a virgin. What gives me hope that I can still recover the data is that Ubuntu sees "RAID components" on the drives (through disk manager and parted), and gives me the option of initializing the array.
My plan of attack is to plug all of the drives back into my Ubuntu box, initialize the RAID array via LVM, and pretty much hope for the best. The data is not uber critical, but it would be a pretty big pain in the behind to rip/upload all the software that was on it (ripping hundreds of DVD/CD images is not fun). If my Ubuntu box can make sense of this newly initialized/mounted RAID set... I'll plug in a 2TB external drive, copy the data over, and rebuild the NAS from scratch, then put my data back on (perhaps a different unit, or something running openfiler).
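One hedged caution before "initializing" anything: if the NAS used Linux software RAID internally, the safer first move is usually to let mdadm assemble the existing set read-only and see whether the data is visible, since an "initialize" operation in a disk utility can overwrite the very metadata needed for recovery. Something along these lines (device names are guesses):
Code:
sudo mdadm --examine /dev/sd[bcde]1       # do the drives still carry RAID superblocks?
sudo mdadm --assemble --scan --readonly   # assemble whatever it finds, without writing
cat /proc/mdstat
sudo mount -o ro /dev/md0 /mnt            # read-only mount to check the data
If the NAS layered LVM on top of the RAID (which the "via LVM" option suggests), a vgscan and vgchange -ay after assembly would be the next step before mounting.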
View 2 Replies
View Related
Jul 8, 2011
I have a software RAID 5 array, and each time I reboot my server I have to rebuild the array again. Rebuilding the array takes too long. I am using Ubuntu Server 10.10.
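A common cause of this, offered as a guess: the array isn't recorded in /etc/mdadm/mdadm.conf (or the initramfs holds a stale copy), so each boot it gets assembled incompletely and then resyncs. Recording it and refreshing the initramfs usually fixes that, and adding a write-intent bitmap makes any future resync after an unclean stop nearly instant:
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
sudo mdadm --grow /dev/md0 --bitmap=internal   # write-intent bitmap: fast resyncs after crashes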
View 7 Replies
View Related
Feb 15, 2010
I have a problem with my mdadm RAID. I wanted to know if anyone has any experience with shrinking RAID5 arrays. I was growing the array from 5 to 6 devices; however, the grow got interrupted and it has recovered to 5 drives. The 6th drive is toast and I am unable to re-add it to the system. I would like to drop the device listed as "removed". I have tried mdadm /dev/md0 --remove detached and failed, with no success. I am running Ubuntu kernel 2.6.28-11 and mdadm is v3.1.1.
Here is the output of "mdadm -D /dev/md0":
/dev/md0:
Version : 0.90
Creation Time : Wed Jan 12 00:46:41 2009
Raid Level : raid5
Array Size : 4883812480 (4657.57 GiB 5001.02 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Feb 15 20:25:07 2010
State : active, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 74fa5199:84b88e81:4ae0fbae:92643084
Events : 0.1331010
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 0 3 active sync /dev/sda
4 8 64 4 active sync /dev/sde
5 0 0 5 removed
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[4] sda[3] sdd[2] sdc[1]
4883812480 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
unused devices: <none>
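Since the reshape completed in the 6-device geometry (note "Raid Devices : 6" and [6/5] above), getting rid of the "removed" slot means reshaping back down to 5 devices. mdadm 3.1.x can do that, but only after the array size (and the filesystem inside it) has been shrunk to fit 4 data disks, and reducing the device count also needs a reasonably recent kernel, so the 2.6.28 kernel mentioned may need upgrading first. Roughly, and only after a good backup (the size below is derived from the Used Dev Size above and should be double-checked):
Code:
# 1. shrink the filesystem to below 4 x 976762496 KiB (for ext3/4: resize2fs /dev/md0 <size>)
# 2. shrink the array itself:
sudo mdadm --grow /dev/md0 --array-size=3907049984
# 3. reshape down to 5 devices (slow; needs a backup file on another disk):
sudo mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape-backup
# 4. once finished, grow the filesystem back to fill the array if desired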
View 4 Replies
View Related
Feb 17, 2011
I have a CentOS 5 based Linux system with a 3Ware 9550SU RAID card and four 500GB drives set up in a RAID5 array (3 in the array and 1 hot spare).
I want to 'replace' these drives with four 2TB drives without data loss. My server case has a total of 8 drive bays all hot-swap and all attached to the RAID card, this means I have four empty drive bays on the RAID card.
One thought is to put the four new 2TB drives in the empty drive bays and configure them as a new RAID5 array. Then the question is how do I "mirror" the original RAID5 array over to the new one?
This is just a thought though; I am not sure it will work. In short, my question to this forum is: how do I accomplish this upgrade?
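One hedged approach along those lines: since this is a hardware (3ware) controller, the new four-drive unit would be created in the controller's BIOS or management tool, and the "mirroring" is then just a filesystem-level copy done from Linux, followed by switching mount points. Device names and mount points below are placeholders:
Code:
sudo mkfs.ext3 /dev/sdb1                               # the new RAID5 unit as exposed by the controller
sudo mkdir -p /mnt/newarray
sudo mount /dev/sdb1 /mnt/newarray
sudo rsync -aAXH --numeric-ids /data/ /mnt/newarray/   # /data = mount point of the old array
# verify the copy, then update /etc/fstab so the new unit mounts where the old one was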
View 4 Replies
View Related
Mar 23, 2010
I'm currently in the process of getting my server moved over from a Server 2003 machine to one based on Fedora 12. My issue is that I have an existing RAID5 array on a rr1740 card. When I install this in the new system, each individual drive shows up as sdc, sdd, etc., not as one volume. I have tried installing the Highpoint driver, but I get an error that sata_mv cannot be unloaded. I have tried adding this to /etc/modprobe.d/blacklist to no avail.
The system currently looks like this: ASUS mini-ITX Atom D510 based board with 2x onboard SATA; attached to these are a 160GB OS drive and a 400GB data drive. Highpoint RR1740 PCI card with 5x500GB drives in RAID5.
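A hedged guess at why the blacklist isn't taking: on Fedora 12 the blacklist file generally needs a .conf extension under /etc/modprobe.d/, and sata_mv may already be loaded from the initramfs before the blacklist is read, so the initramfs has to be rebuilt (Fedora 12 uses dracut) and the machine rebooted:
Code:
echo "blacklist sata_mv" | sudo tee /etc/modprobe.d/blacklist-sata_mv.conf
sudo dracut --force    # rebuild the initramfs so the blacklist applies at early boot
sudo reboot
lsmod | grep sata_mv   # should come back empty afterwards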
View 3 Replies
View Related
Mar 31, 2011
I am trying to find out whether I am able to retrieve data off a RAID5 array. History: my system disk was partitioned to 10 gigs and over time filled up to 100%, locking me out of the GUI (I am a casual user). Now that I am relegated to the CLI, I need help to see if rebuilding the array and keeping the data intact is possible. I have three SATA hard drives in the array, and now it appears that during the boot process it fails and can't find the array.
I did the following:
mdadm -D /dev/md0
and here's some of the output:
State: active, FAILED, Not Started
And out of the three drives, it states that two have failed. I find this hard to believe, as I had no issues until my system disk was full. A coworker today helped me find some files to delete, and I now have the system disk down to 97% / 8.8 gigs, yet I still cannot enter the GUI due to the array issue.
Out of the three discs, here are the results (taken also from mdadm -D /dev/md0):
Number Major Minor RAID device STATE
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
2 0 0 2 removed
Sorry for providing so little, but I am in (Repair Filesystem) mode and only have local access to the machine meaning all outputs will need to be retyped.
I am able to do anything to the box as its only purpose was media storage and serving.
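Two drives being marked "removed" at once, right after an unrelated full-disk problem, often just means the array refused to start rather than that the disks actually died. A hedged sequence worth trying before anything destructive (partition names are guesses; adjust to match the real members):
Code:
sudo mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 | grep -E 'Events|Update Time|State'
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1   # accepts a slightly stale member
cat /proc/mdstat
sudo fsck -n /dev/md0                                                  # read-only check before mounting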
View 14 Replies
View Related
Apr 26, 2009
I've got a RAID5 array that doesn't want to automount after rebooting. I'm pretty familiar with Linux, RAID, and mdadm, and up until now I've had the RAID5 array working just fine. However, whenever I reboot, the array drops off and won't remount until I manually assemble and then mount the thing. I find this odd because I had everything automounting just fine back in 10.3, and even in 11.0 (I think - not sure on that). Currently, things are working, but I'd really like to not have to type
Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
followed by
Code:
mount /dev/md0 /mnt/data
every time I reboot. Even including this in some sort of start-up script seems kludgey... Surely there must be a more elegant way of automatically bringing up a RAID5 array after booting? I'm not sure what information you'll need, so I'm going to go ahead and include as much as I can anticipate...
So having already used the commands:
[Code]....
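On openSUSE (which the 10.3/11.0 references suggest this is), boot-time assembly is normally driven by an ARRAY line in /etc/mdadm.conf plus an ordinary fstab entry, so a sketch of the persistent setup would be (the UUID in the generated line is specific to the array):
Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
#   should produce a line like: ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
echo '/dev/md0  /mnt/data  ext3  defaults  0  2' | sudo tee -a /etc/fstab
sudo mkinitrd          # regenerate the initrd, if the array needs to assemble early in boot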
View 9 Replies
View Related
Jul 5, 2010
I also get sent to a Busybox (initramfs) shell with no text editor and don't know how to copy all the error messages and post them here. If there is a way, let me know. I've typed it out in the meantime:
Code:
md0 : inactive sdxxxx
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
[Code].....
This is with a 3 disk RAID5 array. I turned off the system, pulled out a drive, and started it back up. Fresh install, all I've done so far is apt-get update and upgrade.
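This matches Ubuntu's default behaviour of dropping to the initramfs when an array comes up degraded at boot. If booting degraded is acceptable, it can be allowed either per-boot from the GRUB line or permanently (a sketch; the config file is created if it doesn't exist):
Code:
# one-off: at the GRUB menu, edit the kernel line and append:
#   bootdegraded=true
# permanently:
echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u
From the BusyBox prompt itself, running mdadm --assemble --scan and then exit will often let the current boot continue.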
View 4 Replies
View Related
May 20, 2011
My box has a raid5 array (mdadm) with everything on it (/boot and /) except swap, which is spread across the 4 drives. I had Ubuntu 10.10 (amd64) installed with grub1; when I upgraded to Natty (11.04) it automatically installed grub2. Well, boot fails: it always goes to grub rescue no matter what happens. I've installed and reinstalled grub2, and boot always fails with:
"error: file not found".
In grub rescue I can see that md0 is actually available, an "ls" to (md0)/boot succeeds but the strange thing is that an "ls" to (md0)/boot/grub prints nonsense, as does an "ls" to (md0)/boot/usr/lib/grub/i386-pc/. When I try to load the required modules for boot (linux raid etc) in grub I also always get a "file not found error" (I fsck'd md0, which says everything's fine). I have installed the latest version of grub2 and executed grub-install in all four drives.
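When grub rescue can see (md0)/boot but prints garbage for its contents, a common recovery path is to boot a live CD, assemble the array there, chroot into it, and reinstall grub2 onto every member disk so that whichever drive the BIOS picks can boot. A sketch (the drive letters are examples for the four members):
Code:
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
grub-install /dev/sda && grub-install /dev/sdb && grub-install /dev/sdc && grub-install /dev/sdd
update-grub
exit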
View 6 Replies