Server :: Recreate RAID-5 Array Without Losing Data
Dec 11, 2010
I rebuilt a server and am now trying to recover my large data arrays. The server was running Ubuntu 10.04 LTS before; I decided to rebuild it with CentOS simply because I am more familiar with it. I had 2 RAID-5 arrays on the old server: 4 x 1TB -> md0, 5 x 2TB -> md1. The newly built server does not know about these arrays yet. How can I reassemble the arrays without losing my data? I know the data can still be accessed, because booting the server with a live CD mounts and shows the arrays just fine. Should I boot with a live CD and copy the mdadm config file?
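The array definition lives in the md superblocks on the member disks themselves, so on the new CentOS install the arrays can usually be reassembled without recreating anything. A minimal sketch, assuming the member partitions are visible to the new OS (nothing here writes to the data):

Code:
# scan all block devices for md superblocks and assemble anything found
mdadm --assemble --scan

# if nothing is picked up automatically, record the detected arrays in mdadm.conf first
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan

# check the result before mounting
cat /proc/mdstat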
I had Fedora 11 on an IDE HDD and a RAID 5 of 4 SATA HDDs on a RocketRAID 1640 card. After my HDD crashed, I am trying to install CentOS 5 on a new HDD, and the problem is how to set up the RAID 5 without losing the data that is on it.
I have a software RAID array using mdraid that consists of two 1.5TB drives that I use for storage; the array is mounted at /Storage. I am running out of space in the array, so I ordered two more 1.5TB drives to create a 4-drive RAID 1+0 array, which will be 3TB. My question is: how do I create the new array and not lose any data?
The drives and partitions are sdc1, sdd1, and soon to be sde1, sdf1. I currently have 4 RAID arrays (md0, md1, md2, md3). I think I can create the RAID 1+0 array with the two new drives, copy the data from my current array to the new one, remove the old array, then add the two original drives to the new array. But I wanted to ask here first to make sure my data doesn't go poof.
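One way to stage that plan is to create the RAID 10 degraded, with the two new drives filling one slot in each mirror pair, copy the data across, then hand the old drives to the new array. A rough sketch with hypothetical device names and the default near layout; double-check which slots form mirror pairs on your mdadm version before relying on this:

Code:
# create a 4-slot RAID 10 with only the two new drives present;
# in the default near layout slots 0/1 and 2/3 are mirror pairs,
# so the two "missing" slots must not fall in the same pair
mdadm --create /dev/md4 --level=10 --raid-devices=4 \
    /dev/sde1 missing /dev/sdf1 missing

mkfs.ext3 /dev/md4
mount /dev/md4 /mnt/newstorage

# copy the data from the old array
rsync -a /Storage/ /mnt/newstorage/

# retire the old array and move its drives into the new one
umount /Storage
mdadm --stop /dev/mdX                        # whichever md device held the old array
mdadm --zero-superblock /dev/sdc1 /dev/sdd1
mdadm /dev/md4 --add /dev/sdc1 /dev/sdd1     # rebuild fills the two missing slots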
How do I recreate an LVM RAID 1 partition without destroying the data on the discs? I have a 650GB data partition which is a RAID 1 array with ext3. Two days ago, the system (Ubuntu 9.04) started to refuse to write to it, claiming "no space left on device", even though there is about 102GB free and the disk is only 85% full (according to df)! Interestingly, removing a couple of GB did not help; after a reboot the disk was full again.
I did the "tune2fs -m 0" trick and then forced a file check on the next reboot with "sudo touch /forcefsck", and the result is that the RAID partition is gone. I have the two physical drives /dev/sd*, unmounted, but /dev/md1 is no longer there. What can I do to re-create it without losing the data? I realized that I ran tune2fs on the physical partitions /dev/sd* - was I supposed to run it on /dev/md1?
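tune2fs and fsck act on the filesystem, so they would normally be pointed at /dev/md1 rather than the underlying /dev/sd* members. If the member superblocks survived, the mirror can usually be brought back without recreating it; a sketch with placeholder member names:

Code:
# find which partitions still carry an md superblock for the old array
mdadm --examine --scan
mdadm --examine /dev/sda1 /dev/sdb1      # placeholder member names

# reassemble the mirror from those members, then check the filesystem on the md device
mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1
fsck.ext3 -n /dev/md1                    # read-only check first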
I have acquired a Dell PowerEdge 2850 server and installed Ubuntu Server onto it, only to find out we can't use Linux for its intended purpose, so I need to uninstall/remove Ubuntu; it has a RAID 5 array on the server.
I had a RAID 5 + LVM2 array and lost a disk. While the array was rebuilding, the power went down and the recovery stopped. When power came back and I started the machine, the array was unable to start; it was degraded and the states differed between disks. Each disk saw the array in a different way. Here are the states:
/dev/sdd1:
      Number   Major   Minor   RaidDevice   State
this       0       8      33            0   active sync   /dev/sdc1
The first part of the output for /dev/sdc1 is the same for all the devices, so I am only posting the states. Another thing is that all the devices say there is no superblock. It seems that 3 disks are "active sync", but the states of the others don't match each other. And /dev/sdd1 is a spare, the disk I added manually at first to start the recovery process.
Is there a way for me to mount a RAID array member directly, without using any of the RAID tools? For instance, I have a RAID 1 array that contains /dev/sda1 and /dev/sdb1. How can I mount /dev/sda1 or /dev/sdb1 directly? Doing mount /dev/sda1 <mnt point> does not work, and specifying the filesystem type with -t doesn't work either.
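Whether a member can be mounted directly depends on where the md superblock sits. With 0.90/1.0 metadata the superblock is at the end of the partition and the filesystem starts at offset 0, so a read-only mount of the member itself often works; with 1.1/1.2 metadata the data starts at an offset, which a loop device can skip. A sketch, with hypothetical device names and an example offset:

Code:
# find the metadata version and (for 1.1/1.2) the data offset in 512-byte sectors
mdadm --examine /dev/sda1

# 0.90 or 1.0 metadata: the filesystem starts at the beginning of the partition
mount -o ro /dev/sda1 /mnt/member

# 1.1 or 1.2 metadata: skip the data offset via a read-only loop device
losetup --read-only --offset $((2048 * 512)) /dev/loop0 /dev/sda1   # 2048 is an example offset
mount -o ro /dev/loop0 /mnt/member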
I have been having some odd issues over the last day or so while trying to get a RAID 5 array running in software under Kubuntu. I installed 3 1TB drives and started up; my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to RAID auto-detect and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.
By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.
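Two things that may help here, hedged since I can only guess at the exact state: identifying the array by UUID in mdadm.conf so the sd* reordering stops mattering, and making sure md has actually released the third disk before re-adding it. A sketch with placeholder device names:

Code:
# write an mdadm.conf that identifies arrays by UUID rather than by device name
echo "DEVICE partitions" > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

# if the third drive shows as busy, check whether it was pulled into a stray md device
cat /proc/mdstat
mdadm --stop /dev/md_d0        # hypothetical stray array holding the disk

# then wipe its stale superblock and add it back to the real array
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1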
I run
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1
and I get
md1: raid array is not clean -- starting background reconstruction
Why is it not clean? Should I be worried? The HD is not new; it has been used before in a RAID array, but has been repartitioned.
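For what it's worth, a degraded one-disk mirror created this way can be inspected before any data goes onto it; a couple of harmless read-only checks (device name taken from the command above):

Code:
cat /proc/mdstat           # should show md1 active with [_U], i.e. one empty slot, one member up
mdadm --detail /dev/md1    # the State line should settle on "clean, degraded"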
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
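When assembly stops at "2 drives - not enough", the usual next step is to compare the event counters recorded on all four members and, if the other two were only dropped by an unclean shutdown, force the assembly. A sketch with hypothetical member names; --force accepts slightly stale members, so it is worth being sure nothing has been written to the disks since:

Code:
# compare the event counts and states recorded in each member's superblock
mdadm --examine /dev/sd[bcde]1 | egrep 'Event|State|/dev/sd'

# stop any half-assembled array, then force assembly from the named members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1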
I have been running a server with an increasingly large md array and have always been plagued with intermittent disk faults. For a long time I've attributed those to either temperature or power glitches. I had just embarked on a quest to lower the case and drive temperatures: they were running between 43 and 47C, sometimes peaking at 52C, so I've added more case fan power and made sure the drive cage was in the airflow (it has its own fan, too). Also, I've upgraded my power supply and made very sure that all the connectors are good. The array currently is a RAID6 with 5 Seagate 1.5TB drives.
When everything seemed to be working fine, I looked at my SMART logs and found that two of my drives (both well over 14000 operating hours) were showing uncorrectable bad blocks. Since it's RAID6, I figured I couldn't do much harm, so I ran a badblocks test on it, zeroed the blocks that were reported bad (figuring the drive's defect management would remap them to a good part of the disk) and zeroed the superblock. I then added it back to the pack and the resync started. At around 50%, a second drive decided to go, and shortly thereafter a third. Now, with two out of five drives gone, RAID6 will fail. Fine. At least no data will be written to it anymore; however, now I cannot reassemble the array.
Whenever I try, I get this:
Code:
mdadm --assemble --scan
mdadm: /dev/md1 assembled from 2 drives and 2 spares - not enough to start the array
Which is not fine. I'm sure that three devices are fine (normally, a failed device would just rejoin the array, skipping most of the resync by way of the bitmap) so I should be able to reassemble the array with the two good ones and the one that failed last, then add the one that failed during the resync and finally re-add the original offender. However, I have no idea how to get them out of the "(S)" state.
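Getting members out of the (S) state generally means stopping the auto-assembled array and reassembling by hand with --force, naming only the members you trust, then re-adding the stragglers afterwards. A sketch with placeholder device names, since I don't know which disk is which:

Code:
mdadm --stop /dev/md1

# force-assemble from the two good members plus the one that failed last
mdadm --assemble --force /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1

# once md1 is running (degraded), re-add the drive that failed mid-resync,
# and only afterwards the original offender
mdadm /dev/md1 --re-add /dev/sdd1
cat /proc/mdstat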
When we assemble a RAID array, where does mdadm load the configuration information for that array from? I thought it refers to the /etc/mdadm.conf file, but on my system the mdadm.conf file doesn't even contain all the information. Still, it is able to successfully assemble the previously created device.
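As far as I understand it, the array definition is stored in the md superblock written on each member device, so mdadm.conf acts more as a hint (which devices to scan, which arrays to auto-start) than as the source of truth. A quick way to see what is actually recorded on the disks:

Code:
# print the array records found in the on-disk superblocks
mdadm --examine --scan

# full superblock contents for one member (UUID, level, member count, event counter)
mdadm --examine /dev/sda1    # hypothetical member device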
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:
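If the goal really is a JBOD that just concatenates the mismatched disks, an md "linear" array may behave more predictably than the Disk Utility wizard, since its size is simply the sum of the members. A sketch with hypothetical device names:

Code:
# concatenate the disks end-to-end (no striping, no redundancy)
mdadm --create /dev/md4 --level=linear --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

mkfs.ext4 /dev/md4      # any filesystem available on the 10.04 VM will do
mount /dev/md4 /mnt/backup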
I need to mount my RAID array on a CentOS 5.2 Samba server.
Here are my hardware specs:
Motherboard: Tyan S2510 LE dual PIII
CPUs: Intel PIII 850MHz socket 370
Memory: 4 GB Crucial 133 ECC SDRAM
OS drives: 2 x IBM Travelstar 6.4 GB 2.5" hard drives (low heat/noise)
Storage: 4 x Seagate 500 GB IDE 7200 RPM
RAID controller: 3Ware 7500-12 (RAID 5, 66 MHz PCI bus)
NIC: 3COM 3C996B-T gigabit NIC (66 MHz PCI bus)
I have the 2 IBMs set as RAID 1 (mirror) and the 4 Seagates as RAID 5 (1.5 TB). I have installed the OS with minor problems (the motherboard doesn't like the 2.6.18-128.1.14.el5 kernel, so I removed it from my grub.conf).
My problem is mounting the RAID array. I have done the following: partitioned with fdisk (fdisk /dev/sdb), then formatted with the following command: mkfs.ext3 -m 0 /dev/sdb
The hard drive was formatted with the ext3 file system, but I have mounted it as an ext2 file system as I don't want 'journaling' to occur. I then edited my /etc/fstab like this: .....
Then: mount -a. When I go into my "home" directory and type ls, I get the following:
[root@hydra home]# ls -l
total 24
drwx------  2 zog  zog   4096 Jun 23 15:50 zog
lrwxrwxrwx  1 root root     6 Jun 23 15:46 home -> /home/
drwxrwxrwx  2 root root 16384 Jun 23 15:34 lost+found
drwxr-xr-x  2 root root  4096 Jun 23 17:18 tmp
Why is my home directory showing under home?
I recently upgraded a server from Fedora 6 to Fedora 14. In addition to the main hard drive where the OS is installed, I have 3 1TB hard drives configured for RAID5 (via software). After the upgrade, I noticed one of the hard drives had been removed from the RAID array. I tried to add it back with mdadm --add, but it just put it in as a spare. I figured I'd get back to it later. Then, when performing a reboot, the system could not mount the RAID array at all. I removed it from the fstab so I could boot the system, and now I'm trying to get the RAID array back up.
I ran the following: mdadm --create /dev/md0 --assume-clean --level=5 --chunk=64 --raid-devices=3 missing /dev/sdc1 /dev/sdd1. I know my chunk size is 64k, and "missing" is for the drive that got kicked out of the array (/dev/sdb1). That seemed to work, and mdadm reports that the array is running "clean, degraded" with the missing drive. However, I can't mount the RAID array. When I try: mount -t ext3 /dev/md0 /mnt/foo I get:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
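Before doing anything that writes, it may be worth confirming whether the recreated array actually lines up with the old filesystem; a re-create can end up with a different member order or metadata format than the original, which shifts the data. A few read-only checks (nothing here modifies the array):

Code:
# does an ext3 superblock show up where mount expects it?
dumpe2fs -h /dev/md0

# dry-run filesystem check, no repairs written
fsck.ext3 -n /dev/md0

# what mount actually complained about
dmesg | tail -n 20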
We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.); however, we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD): Code: dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img then reverse the process on the new PC, finally using Code: mdadm --assemble to re-create and re-build the array.
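For completeness, the restore leg of that approach would be roughly the mirror image; a sketch, assuming the image was taken from one RAID 1 member and the target disk is at least as large (paths and device names are illustrative):

Code:
# write the saved image back onto the replacement disk (from a live CD)
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096

# re-read the partition table, assemble the (degraded) array, then add the second disk
partprobe /dev/sda
mdadm --assemble --scan
mdadm /dev/md0 --add /dev/sdb1      # hypothetical second member; this triggers the rebuild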
I am learning software RAID 1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check there were no problems, and it booted. I have installed the old drive back in the system as hdc and need to resync the drives (I used the old drive, so the partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary, hdc secondary) using mdadm?
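raidhotadd was part of the old raidtools package; with mdadm the equivalent is --add, and the resync starts on its own. A sketch, assuming the mirror is /dev/md0 and the partition layout on hdc already matches hda:

Code:
# put the returning partition back into the mirror; md resyncs it automatically
mdadm /dev/md0 --add /dev/hdc1

# watch the rebuild progress
watch cat /proc/mdstat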
This is the message I get when I try to start it: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off of it would be awesome.
Trying to install RHEL5-AS 64-bit onto an HP DL320 G6, with the RAID being a mirrored array on the Smart Array B110i SATA controller. The RAID is configured in the BIOS and seems fine.
When I install Red Hat, I have to use 'linux dd' to load the HP-provided driver (http://h20000.www2.hp.com/bizsupport...5&mode=4&idx=0), and that works fine during the installer: the GUI during the install sees the RAID just fine, it sees one volume, calling it the HP Volume. However, when the system boots after the install, the RAID is gone, and it's now seeing two drives, /dev/sda and /dev/sdb:
I have an Acer Altos EasyStore SATA NAS box that hung; the only way to reboot was to crash the system (unplug it). Upon reboot it was not recognising the hard drives (it wanted to do a destructive reinitialize). Most of the important data was backed up, however some was overlooked and we'd quite like to get it back. Removing the disks, placing them in a PC with enough SATA bays to cope, and booting with a live Linux distribution (System Rescue CD), I can see the 4 drives are not suffering hardware errors and that the original partitions exist. Using mdadm I can assemble the arrays without error (there seem to be three, but the only one I am concerned with is the RAID5 array of about 3TB). /dev/m1p2 mounts as a loop device once an offset is entered; in turn this mounts as an XFS partition. However, despite df showing the partition to be almost full, ls -l or ls -a on the mount point shows it to be empty!
I got this far using a translation from a German-language forum; unfortunately I only speak a little German, and the only other English-language post on a similar matter I found within that site had no replies. The next step was to unmount the loop, then run xfs_check and xfs_repair on the file system. xfs_check returns that there are a few dir size and offset errors along with link count mismatches. This I would presume is normal for a file system that has become slightly corrupted. xfs_repair (version 3.0.3) gets as far as Phase 3, where it finds and corrects zero-length entries, offsets on directories and bogus inode numbers. However, the final two lines are:
A search on the error "missing out data size" just returns the code that generates it; is anybody able to explain what it means? Also, after remounting the drive, ls and variants of it still do not return anything. Am I missing something? (The root I am logged in with now would presumably have different credentials to root on the NAS box, so how do I get around this?)
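I can't say what that particular xfs_repair message means, but before any further repair attempts it is usually worth imaging the assembled array (if you have the space) and sticking to non-destructive checks; a sketch with placeholder device names and paths:

Code:
# keep a raw copy of the array contents before further repair attempts
dd if=/dev/md1 of=/mnt/spare/md1-backup.img bs=1M   # needs ~3TB free elsewhere

# dry-run repair: reports problems without writing anything
xfs_repair -n /dev/loop0

# after any real repair, mount read-only first and list as root
mount -o ro -t xfs /dev/loop0 /mnt/nas
ls -la /mnt/nas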
Hello everyone! I use Ubuntu 9.10 (Karmic). But I am totally disappointed!!! Everything is too slow! I want to install Ubuntu Ultimate 2.0. Can I do that without losing any data (music, movies, etc.)? Do I have to store my data on another HDD?
I have a system that has the following partitions:
Now sdc is a new drive I added. I would like to pool that new drive with the RAIDed drives to give myself more space on my existing system (and structure). Is this possible, since my RAID already has data on it?
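If the existing array is Linux software RAID under mdadm (RAID 5, say), a disk can be added and the array grown in place with the data intact: a reshape followed by a filesystem resize. If it is hardware or fake RAID this sketch does not apply; device and filesystem names here are hypothetical:

Code:
# add the new disk's partition as a spare, then grow the array onto it
mdadm /dev/md0 --add /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=4

# after the reshape finishes, enlarge the filesystem to use the new space
resize2fs /dev/md0      # ext3/ext4 can be grown online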
I have Ubuntu 9.04. I want to know if I can create a partition based on what I have right now, because when I installed Ubuntu I didn't create a partition for installing Windows, so I want to do it now. Or, in any case, how can I reinstall Ubuntu without losing the data stored on my hard drive? What is the best thing to do on my PC? I am very happy with Ubuntu and want to keep it, but sometimes I need some apps in Windows.
I had a dual boot of Windows 7 and Ubuntu 9.10. Recently I had some problem with my Windows OS, so I restored the C drive to factory settings. Since both operating systems were on the C drive, when I tried to boot, GRUB was showing a problem; the information displayed was: loading grub, the file does not exist, rescue grub>. So what should I do to restore GRUB so that I can boot again into Windows 7 and Ubuntu without losing my data?
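If the Ubuntu partition survived the factory restore, GRUB 2 (which 9.10 uses) can usually be reinstalled from a live CD by chrooting into the old root; if the restore wiped that partition, recovering the data is a separate problem. A sketch where /dev/sda5 stands in for the real Ubuntu root partition:

Code:
# from an Ubuntu live CD
sudo mount /dev/sda5 /mnt                 # the old Ubuntu root partition (hypothetical name)
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
grub-install /dev/sda                     # reinstall GRUB 2 to the MBR
update-grub                               # regenerate the menu, picking up Windows 7 as well
exit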
I'm trying to install Ubuntu 10.10 on a desktop computer with three disks: sda with NTFS on sda1, where I have Windows XP; sdb, where I had Ubuntu 10.04; and sdc, where I have an NTFS partition. I want to install Ubuntu 10.10 on sdb without losing the data on sda and sdc. When I try to install it and choose to specify partitions manually, I only find this: Where are sdb and sdc? What do I choose in Device for Boot Loader Installation?
I have a home directory which is mounted on an LVM partition. How can I reduce the size of the LVM partition without losing the data in the home directory? Whenever I use the lvreduce command, it shows me a warning message that all the data will be lost. I want to reduce the size of the LVM partition without losing my home directory data.
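The warning appears because lvreduce only shrinks the logical volume, not the filesystem inside it; shrinking safely means reducing the filesystem first (or letting lvreduce do both steps). A sketch for an ext3/ext4 home LV, with placeholder names and sizes; the filesystem must be unmounted to be shrunk:

Code:
umount /home
e2fsck -f /dev/vg0/home              # mandatory check before resize2fs will shrink
resize2fs /dev/vg0/home 80G          # shrink the filesystem first...
lvreduce -L 80G /dev/vg0/home        # ...then the LV, down to (never below) that size
mount /home

# newer LVM versions can do both steps in one go:
# lvreduce --resizefs -L 80G /dev/vg0/home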
In a nutshell, our RAID 1 array was rendered broken and we were advised that core lib files were missing and the OS needed to be reloaded... a quote from our server host: "The OS is not healthy. This server will need a reinstall. Libs are missing." This was after having replaced what we thought was a faulty /dev/sdb. So they reloaded the OS (Debian 5.0.2 x86_64) on 2 fresh drives, and installed the old /dev/sda as /dev/sdc once the reload was completed. Here are the contents of /etc/fstab on the fresh install, so we know what we're working with:
The one problem I see myself running into is that /dev/md1 and /dev/md2 are currently in use by the new system, so I cannot mount the old array there. I should also note that reloading the OS is a viable option if needed, as we haven't started configuring the server yet. So if we need to reinstall the OS and assign the NEW RAID arrays to something other than /dev/md1 and /dev/md2, then we can do that.
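It should not be necessary to free up the md1/md2 names: the old drive's arrays can be assembled under unused device nodes and mounted wherever is convenient. A sketch, with the old member partitions guessed as /dev/sdc1 and /dev/sdc2:

Code:
# assemble the old, now-degraded arrays under new md names
mdadm --assemble --run /dev/md3 /dev/sdc1
mdadm --assemble --run /dev/md4 /dev/sdc2

# mount them read-only out of the way and copy the data off
mkdir -p /mnt/oldmd1 /mnt/oldmd2
mount -o ro /dev/md3 /mnt/oldmd1
mount -o ro /dev/md4 /mnt/oldmd2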
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am currently running the Intel RAID software under Windows 7. It works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?