Note: This how-to assumes you are using LVM on top of the RAID5
Example scenario:
VG0 <-> md0 = RAID5(3x500GB) = sda1, sdb1, sdc1
Volume Group VG0 is made up of a single Physical Volume (md0), which in turn is made up of 3x500GB hard drives in a RAID5 array (sda1, sdb1, sdc1).
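For reference, setting up that scenario from scratch looks roughly like this. A minimal sketch: the device names follow the example above, while the LV name and filesystem are only placeholders.

Code:
# Create the RAID5 array from the three partitions
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# Put LVM on top of it
pvcreate /dev/md0
vgcreate VG0 /dev/md0
lvcreate -n data -l 100%FREE VG0     # LV name "data" is an assumption
mkfs.ext3 /dev/VG0/data              # filesystem choice is an assumption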
I am running Lucid and have a 4+1 (spare) RAID5 array made up of 1TB disks. I upgraded mdadm to version 3.1.4 and then performed the following operation:
I have a 500GB drive mounted at /var/lib/mysql which is mostly empty and not part of any RAID array. The reshaping started and everything looked OK. The access lights on the 5 drives were all coming on at the same time on a regular basis. The status from /proc/mdstat showed the array being reshaped to RAID6, albeit slowly: an average speed of around 4000KB/s and an estimated completion time of 4000 minutes. This all seemed reasonable. This was performed in the late afternoon.
The next morning I checked the status and the average speed was down to 300-400KB/s and the estimated time to complete was 40,000 minutes. When I look at the drive lights, one drive's access light is on solid and the other four drives come on intermittently. Running iotop doesn't show anything useful; mdadm and kjournal show up occasionally. The same is true for top (running on an Intel i5 2500K processor). Here is the output of cat /proc/mdstat:
I've been playing with this for hours, and have been unable to figure it out. I tried to convert my RAID5 array of 4 active disks and 1 spare to a RAID6 with 5 active disks.
I did this:
Code:
mdadm --grow /dev/md4 --raid-devices 5 --level 6

Here is what I have on /dev/md4:
Code:
/dev/sde1 active
/dev/sdg1 active
/dev/sdj1 active
/dev/sdf1 active
removed
/dev/sdh5 spare
but it tells me that /dev/sde is busy, and then that it has a bad superblock (from what I've read, I'm sure the bad superblock is just a consequence of the "busy" message). I've tried this with the -f option too, with no luck.
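For what it's worth, here is a hedged sketch of the usual pre-checks and the general form of a RAID5-to-RAID6 grow with a backup file; this is the general shape of the command, not a confirmed fix for the "busy" error, and the backup file path is just a placeholder.

Code:
# Make sure nothing else is reshaping or resyncing, and note the current layout
cat /proc/mdstat
mdadm --detail /dev/md4
# Grow to RAID6 across 5 devices; the backup file protects the critical section
mdadm --grow /dev/md4 --level=6 --raid-devices=5 --backup-file=/root/md4-reshape.bak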
I have a RAID 5 array, md0, with three full-disk (non-partitioned) members, sdb, sdc, and sdd. My computer hangs during the AHCI BIOS screen (when AHCI is enabled instead of IDE) if these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).
Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
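There is no in-place conversion, but one hedged approach is to rotate through the disks one at a time, assuming the array has enough slack that a partition starting at 1MiB is still large enough to hold a member (check component sizes with mdadm --detail before starting). The commands below are a sketch, not a tested procedure for this exact box.

Code:
# Repeat for each member, waiting for the resync to finish in between
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
# Create an MBR label with one partition, then set its type to 0xfd (Linux raid autodetect)
parted -s /dev/sdd mklabel msdos mkpart primary 1MiB 100%
sfdisk --id /dev/sdd 1 fd
mdadm /dev/md0 --add /dev/sdd1
watch cat /proc/mdstat          # wait for the rebuild before touching the next disk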
I have no drive failures but just need to recreate a RAID5 set as the next free md disk number. Originally I built a temporary Debian OS on a single drive and had 4x2TB drives in a software RAID5 array (md0); this worked fine and allowed me to move all data to it and remove our old fileserver. I have now pulled out the 4x2TB RAID5 drives and created a new OS on two new 80GB drives, partitioned as follows:
md0 is now 250MB RAID1 as /boot
md1 is 4GB RAID1 as swap
md2 is 76GB RAID1 as /
If I power off and push the 4x2TB drives back in, I cannot see an md3. I presume I would need to create an md3 from these 4 drives, but I don't want to mess things up as it's live data. So I'm here asking for help, or a bit of hand-holding, to get it done right.
PS - It's a Debian Lenny 5.0.3 RAID1 fresh install replacing a Debian Lenny 5.0.3 on a single disk.
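A hedged sketch of what that usually involves: the old RAID5's superblocks live on the 4x2TB members themselves, so the array is normally assembled (not re-created) under a new node. The device names below are assumptions; confirm them with --examine first.

Code:
# See which devices carry the old RAID5 metadata and note the Array UUID
mdadm --examine --scan
mdadm --examine /dev/sdc1            # repeat for each suspected member
# Assemble the existing array under the next free node
mdadm --assemble /dev/md3 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# Make it persistent across reboots (Debian path)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf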
I've been learning a lot from your decent site for a long time, along with my Linux self-study. I'm an MS system admin with 6 months of Linux experience and growing.
## My Situation ## :-
- CentOS 5.6 (Final) x86_64, 2.6.18-238.9.1.el5xen, running as a backup server
- 4 HDDs: 1x500GB, 3x1TB
- 2 RAID arrays: (/dev/md0) RAID1, (/dev/md1) RAID5
- /boot on (/dev/md0) RAID1 using /dev/sda1 and /dev/sdb1
I have just bought four 1TB drives and set up a software RAID level 5. Using the Disk Utility tool I created a GPT partition table, and now when I want to create a partition, I get:
Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/md0, start=0, size=3000610848768, type=
Entering MS-DOS parser (offset=0, size=3000610848768)
MSDOS_MAGIC found
found partition type 0xee => protective MBR for GPT
[Code].....
It says nothing about creating a partition on /dev/md0, although GNOME Disk Utility allows me to do that. If I just run mkfs.xfs /dev/md0, it works fine, yet Disk Utility tells me the disk has not been partitioned (see image):
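For what it's worth, an md device does not need a partition table at all; a hedged sketch of using it directly (the XFS choice follows the post, but the mount point is an assumption):

Code:
mkfs.xfs /dev/md0                 # filesystem straight onto the array
mkdir -p /mnt/raid                # mount point is an assumption
mount /dev/md0 /mnt/raid
echo '/dev/md0  /mnt/raid  xfs  defaults  0 2' >> /etc/fstab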
I am trying to build a media server for my home and am still in the process of evaluating my OS options (Ubuntu Server, Fedora Core, or Win Server). I am planning to use four 1TB drives initially for the RAID5 array. Once it fills up I will add more 1TB drives.
My question is: can Fedora Core create a RAID5 array and grow it later without having to back up the data to an external hard drive and re-create the array? I am looking for something that is easy to use and manage. If Fedora Core doesn't have this option, can you recommend other distributions that can do this?
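mdadm (available on Fedora, Ubuntu, and most other distributions) can grow a software RAID5 in place. A hedged sketch, assuming an ext3/ext4 filesystem sits directly on /dev/md0 and the new disk shows up as /dev/sde1:

Code:
mdadm --add /dev/md0 /dev/sde1              # new disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=5      # reshape from 4 to 5 devices
cat /proc/mdstat                            # wait for the reshape to finish
resize2fs /dev/md0                          # then grow the filesystem to match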
I had FC14 installed on an SSD, and four 500GB drives in a software RAID-5 configuration. However, just recently, my FC14 failed horribly. Fortunately my admin had recently backed up the /etc directory. When FC14 failed, he reinstalled FC14 on the SSD. Is there any way for me to re-establish the RAID-5 drives, since they were not affected?
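A hedged sketch of re-establishing the array after the reinstall: the member superblocks are on the four data drives, not the SSD, so they should still be discoverable. If the backed-up /etc contains the old mdadm.conf, restoring it first helps, but the backup path below is only a placeholder.

Code:
cp /path/to/etc-backup/mdadm.conf /etc/mdadm.conf   # optional, if the backup has it
mdadm --assemble --scan                             # assemble arrays found on the disks
cat /proc/mdstat                                    # verify the RAID-5 came back
mdadm --detail --scan >> /etc/mdadm.conf            # regenerate the config if it was missing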
Ok, first off, the write speeds are off the hook: 210MB/s on 5400RPM disks (5 disks in a RAID 6). However, read speeds are 68MB/s. I'm wondering, first, why? And secondly, could this be an indicator of something not properly set up that might cause harm to my disks?
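One common, hedged thing to check for slow sequential reads on md RAID is the read-ahead setting on the array device, and the stripe cache for RAID5/6; the values below are only examples to experiment with, not a recommendation for this particular box.

Code:
blockdev --getra /dev/md0          # current read-ahead, in 512-byte sectors
blockdev --setra 8192 /dev/md0     # try a larger read-ahead, then re-run the benchmark
cat /sys/block/md0/md/stripe_cache_size
echo 4096 > /sys/block/md0/md/stripe_cache_size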
I'm running Debian and used mdadm to set up a RAID6 array with 4x1TB drives, with roughly 1.86TB available under LVM. Then I added 4x1TB drives to the array. So now I have an 8-drive RAID6 array with 5+TB available, and the array sees all the available space. The question is how do I extend the volume group so that it uses the whole RAID and not just half of it? As of right now the volume group is only 1.86TB.
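A hedged sketch of the usual sequence: let LVM see the grown physical volume, then extend the logical volume and its filesystem. The VG/LV names and ext4 are assumptions; substitute the real ones from vgdisplay/lvdisplay.

Code:
pvresize /dev/md0                       # the PV grows to the array's new size
vgdisplay                               # free PE count should jump accordingly
lvextend -l +100%FREE /dev/vg0/data     # VG/LV names are assumptions
resize2fs /dev/vg0/data                 # grow the filesystem (ext3/ext4)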
Here's the deal: I had a nice little fileserver running under 2.6.27.21-170.2.56.fc10.x86_64.
3 disks in raid 5 ext4fs, then I thought..."hey I'm a greedy bastard..I want another drive!!
So I get it..do a normal mdadm --grow...after around 1100 minutes .. FINISHED!..whee..happy...
I decide to do an upgrade to 2.6.29.2-52.fc10.x86_64 to get the fix for growing ext4..
reboot...
Code:
md: bind<sda3>
md: sdc3 has same UUID but different superblock to sda3
md: sdc3 has different UUID to sda3
md: export_rdev(sdc3)
md: sdd3 has same UUID but different superblock to sda3
I'm currently in the process of getting my server moved over from a Server 2003 machine to one based on Fedora 12. My issue is that I have an existing RAID5 array on an RR1740 card. When I install this in the new system, each individual drive shows up as sdc, sdd, etc., not as one volume. I have tried installing the HighPoint driver but I get an error that sata_mv cannot be unloaded. I have tried adding this to /etc/modprobe.d/blacklist to no avail.
The system currently looks like this: an ASUS mini-ITX Atom D510 based board with 2x onboard SATA; attached to these are a 160GB OS drive and a 400GB data drive. A HighPoint RR1740 PCI card with 5x500GB drives in RAID5.
I want to see if I am able to retrieve data off a RAID5 array. History: my system disk was partitioned to 10 gigs and over time filled up to 100%, locking me out of the GUI (I am a casual user). Now that I am relegated to the CLI, I need help to see if rebuilding the array and keeping the data intact is possible. I have three SATA hard drives in the array, and now it appears that during the boot process it fails and can't find the array.
I did the following: mdadm -D /dev/md0
and here's some of the output:
State: active, FAILED, Not Started
And out of three drives, it states that two have failed. I find this hard to believe as I had no issues until my system disk was full. A coworker today helped me find some files to delete, and the system disk is now down to 97% / 8.8 gigs, yet I still cannot enter the GUI due to the array issue.
Out of the three discs, here are the results (taken also from mdadm -D /dev/md0):
Number   Major   Minor   RaidDevice   State
   0       8       1        0         active sync   /dev/sda1
   1       0       0        1         removed
   2       0       0        2         removed
Sorry for providing so little, but I am in (Repair filesystem) mode and only have local access to the machine, meaning all output has to be retyped.
I am able to do anything to the box as its only purpose was media storage and serving.
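A hedged sketch of what a recovery attempt often looks like in this situation: read-only inspection first, a forced assembly only if the event counters on the members are close, and copying the data off before anything else. Only /dev/sda1 appears in the output above; the other partition names are assumptions.

Code:
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1    # compare Events and State on each member
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat
mount -o ro /dev/md0 /mnt                        # mount read-only and back up first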
I have a 12-disk RAID6 array with 2 additional spare drives. Two of the drives got out of sync (because I was fat-fingering the mdadm commands while trying to reassemble the array), so I added them back as spares. This is what mdadm is showing me:
I have an 8x1TB RAID6 array that I finally got back to a good state (see my other post here: [URL]). This is on a 9.04 server. Now I can assemble the array no problem, but mounting is the issue. I think the reason might be that the array order changed. In the process of recovering I removed the /dev/md0 array and created a new array. In the create-array command it told me:
Code:
mdadm: /dev/sdb1 appears to be part of a raid array:

for each device in the array. I confirmed that I wanted to continue and the array was recreated. I think this overwrote the superblocks, but I'm not sure.
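One hedged way to see whether the re-create changed the member order or wiped anything useful is to dump each member's superblock and compare; the device range below is only a guess at the eight members.

Code:
for d in /dev/sd[b-i]1; do
    echo "== $d"
    mdadm --examine "$d" | egrep 'UUID|Raid Level|Raid Devices|Events|this|Device Role'
done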
I am currently having problems with my RAID partition. First, two disks were having trouble (sde, sdf). Through smartctl I noticed there were some bad blocks, so first I set them to fail and re-added them so that the RAID array would overwrite these. Since that didn't work, I went ahead and replaced the disks. The recovery process was slow and I left things running overnight. This morning I found out that another disk (sdb) has failed. Strangely enough, the array has not become inactive.
Does anyone have any recommendations as to the steps to take next with regard to recovery/fixing the problem? The disk is basically full, so I haven't written anything to it in the interim.
I have an old Athlon XP 3000 machine that I keep around as a file server. It currently has three 1TB drives which I had set up as mdadm RAID5 on FC10. The machine's original drive held the superblock for the RAID array, and it just had a massive heart attack. I've searched, my biggest source being URL... I can't tell if I can reassemble the superblock info lost with the original hard drive or if I've lost it all.
I installed 10.04 on a new machine and 'tried' to create a 4-disk RAID6 array on four new 4TB drives. The build seemed to go fine, but a check on the new device shows the following:
/dev/md0:
        Version : 00.90
  Creation Time : Sun May 30 21:53:11 2010
One of my four drives died after about 3 years of usage. I replaced it with an identical drive and did an mdadm --add to re-add it to the array. I expected this to take quite a long time, but not more than 1 million minutes to complete!
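An ETA that long often just means md is throttling the resync. A hedged sketch of checking and raising the kernel's rebuild speed limits (values are KB/s per device; the 50000 figure is only an example):

Code:
cat /proc/sys/dev/raid/speed_limit_min     # default is usually 1000
cat /proc/sys/dev/raid/speed_limit_max
sysctl -w dev.raid.speed_limit_min=50000   # let the rebuild use more bandwidth
cat /proc/mdstat                           # the speed and ETA should update shortly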
For backup purposes, I built a RAID6 array on USB. When the backup filesystem is mounted during the night, every so many days a random disk fails:
Code:
sd 9:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 9:0:0:0: [sdb] Sense Key : Illegal Request [current]
sd 9:0:0:0: [sdb] Add. Sense: Invalid field in cdb
In other words, my RAID set fails every so many mounts, but I know how to fix it. What I want to find out is:
- Why does a random drive get failed every now and then?
- How can I prevent that drive from being failed?
BTW: I have a second RAID set that has been functioning for years without error, so the setup I use must be correct. The only difference between both RAID arrays is the different vendor of the disks.
To test stability, I rebooted the system, but on reboot, the array wasn't assembled correctly. Basically it had one device in "md_d2000", as a spare. So I stopped that device with
I don't have any important data on the array yet, so I zeroed the superblocks on all devices, deleted the partitions, and started over. Here I go again:
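For reference, a hedged sketch of that "start over" sequence (stop the stray device, wipe the old metadata, then re-create); the device names and array parameters are placeholders, not the exact ones used here.

Code:
mdadm --stop /dev/md_d2000                     # stop the misassembled device from the post
mdadm --zero-superblock /dev/sd[b-e]1          # wipe old metadata (device names are placeholders)
# then re-partition as needed and re-create the array, e.g.:
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]1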
I have Ubuntu 9.04 and just installed Sound Converter. I am trying to convert a bunch of .ogg files to MP3 to play on my iPod, and it's not working so well. In the Sound Converter options I have it set to convert to high-quality MP3. I choose the folder that the files are in and, after a moment (slow laptop), Sound Converter populates; I hit 'convert' and it shows that the conversion completes in two seconds. All it did was create the new folder structure of artist/album, but there is nothing in there. I'm not sure what I am missing. I used Sound Converter before and it worked fine.
I'm trying to use convert; I have installed ImageMagick. I use this line: convert *.jpg test.pdf but I'm only able to convert a single JPG file to PDF, not multiple files at once. When there's more than one file, I get the following error: Segmentation fault.
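A hedged diagnostic sketch: converting the files one at a time will show whether one particular JPG (rather than the batch itself) triggers the segfault; the output names are simply derived from the input names.

Code:
for f in *.jpg; do
    echo "== $f"
    convert "$f" "${f%.jpg}.pdf" || echo "convert failed on $f"
done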