Ubuntu Servers :: RAID-1 OS Drives - Setting Up A Backup Procedure For The OS Drives
Jan 18, 2010
I'm breaking into the OS-drive side of RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1), and have been testing the fail-over and rebuild process. It works great when I physically fail out either drive. My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: fail one of the disks out of the RAID-1, then image it to a file on an external disk using the dd command (if memory serves, something like "sudo dd if=/dev/sda of=backupfilename.img"). Then re-add the failed disk to the array. If I ever needed to roll back to one of those snapshots, I would just use dd to dump the image back onto an appropriate hard disk, boot from it, and rebuild the RAID-1 from that.
Does that sound like good practice, or is there a better way? A couple of notes: I don't have the luxury of a stack of extra disks, so I can't just do the standard mirror breaks and keep the disks on hand, and something like a tape drive is also not an option.
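For what it's worth, that plan maps onto mdadm fairly directly. A minimal sketch, assuming the array is /dev/md0 with member partition /dev/sdb1 on disk /dev/sdb (all device and path names here are assumptions — substitute your own):

```shell
# Sketch of the fail-out / image / re-add snapshot procedure described
# above. Defines a function; run it manually on the server as root.
snapshot_os_mirror() {
    md=/dev/md0                                   # the RAID-1 array
    part=/dev/sdb1                                # member to fail out
    disk=/dev/sdb                                 # whole disk to image
    img=/mnt/external/os-backup-$(date +%F).img   # destination image file

    mdadm "$md" --fail "$part" --remove "$part"   # drop one half of the mirror
    dd if="$disk" of="$img" bs=4M conv=fsync      # image the whole disk
    mdadm "$md" --add "$part"                     # re-add; mdadm resyncs it
}
```

Rolling back is the reverse: dd the image onto a disk, boot from it, and rebuild the mirror from that disk. One caveat: imaging a failed-out member of a live system captures a crash-consistent state at best, so stopping services first (or doing it from a live CD) is safer.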
I currently have an Ubuntu 10.04 Server with ten 2 TB hot-swappable hard drives. I discovered that a software RAID over 16 TB is not supported, so I split the drives into two sections and have two software RAID arrays storing my movies, audio, pictures, and other software. Total current usage is around 7 TB. Since backing the files up to DVDs or even Blu-ray is laughable, I am going to back the system up to 2 TB hard drives, probably four of them. The problem is that I can only hook one backup drive into the system at a time, using a hot-swap tray. I know I can do this manually: copy files to the drive until it is full, switch the drive out, and repeat. But I am hoping for an automated solution: start the backup, plug the first drive in, let the system fill it, swap, and repeat. It would also be nice if the system remembered what had already been backed up, so that when I add files later I only need to attach the last drive rather than start the process over.
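One way to get the "remember what's already been backed up" behaviour is a manifest file kept on the server: each run copies only files not yet listed, and stops when the current drive fills. A rough sketch (paths and the manifest location are assumptions, not a polished tool):

```shell
# Copy files under SRC to DST that are not yet recorded in MANIFEST,
# recording each successful copy. When DST fills up, cp fails and the
# loop stops; swap drives and rerun with the same manifest.
backup_new_files() {
    src=$1; dst=$2; manifest=$3
    touch "$manifest"
    find "$src" -type f | while read -r f; do
        grep -qxF "$f" "$manifest" && continue   # already backed up
        rel=${f#"$src"/}
        mkdir -p "$dst/$(dirname "$rel")"
        cp -p "$f" "$dst/$rel" || break          # disk full: stop here
        echo "$f" >> "$manifest"                 # remember this file
    done
}
```

Run from cron or by hand after each drive swap; because the manifest lives on the server, only files added since the last run are copied.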
I'm looking for advice on which drives to add to my server for software RAID 5. I would like to use 2 TB drives for the array. The server currently boots off a RAID 1 array, and I have a couple of other drives mounted until I build a RAID 5 array with new drives. I've read horror stories about the Western Digital WD20EARS and Seagate ST32000542AS. So I'm wondering: which large drives are best to use in software RAID?
I am trying to use three 3 TB Western Digital drives in a RAID 5 software array. The trouble is that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.
Here are the commands and output:

$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name:   BackupFull6
RAID type:   RAID5
RAID size:   5589G (11720982528 blocks)
RAID strip:  64k (128 blocks)
DISKS:       /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] : y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name   : isw_cdjhcaegij_BackupFull6
size   : 3131048448
stride : 128
type   : raid5_la
status : ok
subsets: 0
devs   : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 blocks, or about 1.5 TB. The first command seemed to imply it was going to create a 5589 GB array.
The system is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
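One plausible explanation, given those numbers, is a 32-bit sector counter in the ISW metadata: the requested block count overflows 2^32 and wraps. The arithmetic (all figures taken from the output above) lines up closely:

```shell
# Requested size: 11720982528 512-byte blocks. A 32-bit counter wraps at 2^32.
requested=11720982528
wrapped=$((requested % 4294967296))            # 11720982528 mod 2^32
echo "$wrapped"                                # 3131047936 -- within one
                                               # stripe of the reported
                                               # 3131048448
echo "$((3131048448 * 512 / 1000000000)) GB"   # ~1603 GB, i.e. the ~1.5 TB seen
```

If that is indeed the cause, plain mdadm (rather than dmraid with ISW fakeraid metadata) is the usual way to build a software RAID 5 over 3 TB disks on Ubuntu, since it has no such per-array sector-count limit at these sizes.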
I am using software RAID in Ubuntu Server Edition 9.10 to mirror (RAID 1) two 1 TB hard drives. These are used for data storage and websites. I also have an 80 GB hard drive for the operating system. This drive has no backup or RAID at all. Should this drive crash, leaving the system unbootable, will I be able to recover the data on the 1 TB drives, or should I back up the 80 GB drive as well?
I have a total of four 500 GB 7200 RPM HDDs that I would like mirrored using RAID 10. As you can see from the image, Ubuntu 9.10 Server isn't recognizing the full 2 TB. In fact, I'm not even sure about the configuration, as I was expecting the drives to show up as four 500 GB HDDs. Instead I have the configuration above set and ready for Ubuntu to be installed on.
1. Is this typical of a server pre-configured from Dell (PERC 6 RAID controller)?
2. Why is Ubuntu not recognizing the full capacity of the drives, especially when it's a server install?
I would like to install Fedora 11 on an ASUS P5L-VM 1394 motherboard with a 3 GHz Pentium 4 CPU. This is an LGA775-socket board with an Intel 945G chipset. Two SATA hard drives are plugged into the SATA ports, and an IDE DVD drive is plugged into the IDE/ATA port. Using the 32-bit Fedora 11 installation disk, I have seen two cases:
1) No hard drive recognized. When I get to the disk configuration screen, there are no options to choose from.
2) By monkeying around with the BIOS settings or switching the SATA ports the disks are connected to, I can get an alternative mode in which no driver is found for the DVD drive either.
Currently, a version of Ubuntu is installed. UPDATE: The board was purchased in a P3-PH4C barebones, which for unknown reasons requires a different BIOS than the regular P5L-VM 1394. Updating to the most recent BIOS does not resolve the problem. Once the installation procedure fails to recognize the hard drives, going into a shell and examining the boot log shows that the kernel recognized both hard drives. So the question is why the installation procedure is not recognizing them.
This is a strange problem. I have Ubuntu Server installed on proper server hardware. My RAID card reports all four HDDs to Ubuntu as single drives, which is how I set it up, because Ubuntu does not recognize the RAID card in the server. Now you might say: if that's the case, why don't I remove the RAID card and have the BIOS report four single drives to Ubuntu, then set up software RAID? Well, my board has only one SATA port. Ubuntu is all set up on the first drive, and I have set the other three up using software RAID.
The system works great; the only problem is that it freezes sometimes. Not every time, just on the odd occasion. When I use the same hardware without the RAID card (and of course just one HDD), it's great: no freezes. That leads me to believe it's the RAID card. My question is, why will it run great for days and then just freeze on me? Probably a silly question, but if there's an issue with the RAID card, it shouldn't work at all, should it?
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up, install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems, however, the installs broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live CD gives the error:

fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb

and running fsck from 9.04 installed on the other hard drive gives an error like:
The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
In both cases I set the drives up with the ext4 filesystem. There's probably more that I'm forgetting... It seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with different drives in different boxes.
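If leftover isw (Intel fakeraid) metadata is indeed the culprit — the fsck.isw_raid_member error suggests the kernel still sees the disks as RAID members — dmraid can erase that metadata from the disk. A sketch, with the device name as an assumption; erasing metadata is destructive, so double-check the target first:

```shell
# Erase leftover ISW fakeraid metadata from a disk so the system stops
# treating it as a RAID member. The default device is an assumption --
# verify with the -r listing before erasing. Run manually as root.
wipe_fakeraid_metadata() {
    disk=${1:-/dev/sdb}
    dmraid -r "$disk"        # first, show what metadata dmraid sees
    dmraid -r -E "$disk"     # then erase it (prompts for confirmation)
}
```

After the metadata is gone, the partitions should fsck and mount as plain ext4 devices.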
I want to make a RAID 5 array with four 2 TB hard drives. One of the drives is full of data, so I will need to start with three disks and then, once I've copied the data from the fourth onto the array, add the fourth drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
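That two-stage plan maps onto mdadm like this (a sketch only; /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde are placeholder device names — run the stages manually, and not before backing up what you can):

```shell
# Stage 1: build a 3-disk RAID 5 and copy the data from the 4th drive
# onto it. Stage 2: add the 4th drive and grow the array to 4 devices.
grow_raid5_from_three_to_four() {
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sdb /dev/sdc /dev/sdd
    mkfs.ext4 /dev/md0
    # ... mount /dev/md0 and copy the data off /dev/sde here ...
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --raid-devices=4
    # after the reshape finishes (watch /proc/mdstat), enlarge the fs:
    resize2fs /dev/md0
}
```

Note that the reshape takes a long time on 2 TB disks, and during stage 1 the data exists only on the fourth drive, so that copy step is the risky moment.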
I have recently installed an Asus M4A77TD Pro system board, which supports RAID.
I have two 320 GB SATA drives I would like to set up RAID 1 on. So far I have configured the BIOS to RAID 1 for the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I just have two drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to do this within Ubuntu.
I've got a 10.10 installation which I am using as a media/download server. Currently everything is stored on a 1 TB USB drive. With the cost of disks falling, and the hassle of trying to back 1 TB up to DVD (no, it's not going to happen), I was wondering if there's some Linux/Ubuntu utility which can use multiple disks to provide failover/resilience. Could I just buy another 1 TB drive and have it "shadowing" the main one, so that if one goes, I buy another and then restore from the copy?
I have a RAID 6 built on six 250 GB HDDs with ext4. I will be upgrading the RAID to four 2 TB HDDs.
How would one go about this? What commands would need to be run? I'm thinking about replacing the drives one at a time and letting it do the rebuild, but I know that would take a lot of time (which is fine). I don't have enough SATA ports to set up the new RAID and copy things over.
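The one-at-a-time swap is the standard mdadm route when there are no spare ports. A sketch, assuming the array is /dev/md0 on ext4 (device names are placeholders; repeat the replace step for each disk, waiting for each rebuild to finish):

```shell
# Replace one member of /dev/md0 with a bigger disk. Repeat per disk,
# watching cat /proc/mdstat until each rebuild completes.
replace_member() {
    old=$1; new=$2
    mdadm /dev/md0 --fail "$old" --remove "$old"
    # physically swap the disk, then:
    mdadm /dev/md0 --add "$new"
}

# Once every member has been replaced, grow the array and filesystem:
grow_after_all_swapped() {
    mdadm --grow /dev/md0 --size=max
    resize2fs /dev/md0        # ext4 can be grown while mounted
}
```

One wrinkle with going from six 250 GB members to four 2 TB members: reducing the device count is itself a reshape (`mdadm --grow --raid-devices=4`), which mdadm only permits after the array size has been reduced to fit, so that step needs particular care (and a backup).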
I have recently set up and installed Ubuntu 9.04 on a virtual drive using VMware 6.04, and installed the desktop GUI as well. I need to add other drives for data and logging, which I did on the VMware side. I can see the two drives in Ubuntu but cannot access them; I get an "unable to mount location" error when I try. How can I resolve this? I need these two virtual drives to be usable as data drives.
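Freshly added virtual disks appear as raw block devices (e.g. /dev/sdb, /dev/sdc) with no filesystem on them, which is why mounting fails. They need a filesystem and a mount point. A sketch, with device names and mount points as assumptions (run manually as root, and only on disks you know are empty):

```shell
# Put a filesystem on a fresh, EMPTY disk and mount it permanently.
# mkfs destroys anything already on the disk -- check the device first.
prepare_data_disk() {
    disk=${1:-/dev/sdb}
    mnt=${2:-/data}
    mkfs.ext4 "$disk"
    mkdir -p "$mnt"
    echo "$disk $mnt ext4 defaults 0 2" >> /etc/fstab
    mount "$mnt"
}
```

Run once per disk (e.g. `prepare_data_disk /dev/sdb /data` then `prepare_data_disk /dev/sdc /logs`); the fstab entry makes the mounts come back after a reboot.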
I've used it once before but got fed up with the boot menu asking me every time I turned my laptop on, because I wasn't using it enough. I have Windows 7 on drive C and I want to keep it there. I have several 1.5 TB+ drives, and one of them is not being used. I want to dedicate it to Ubuntu and be able to dual-boot with my Windows 7 install. Is this possible? And if it is, what about when this drive is not connected to my laptop? Will that mess up the boot process?
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files I have accumulated over the years, and if the drive fails I will be S.O.L. I have seen discussions where someone has a RAID 1 setup but keeps the drives external to the case: two separate external eSATA hard drives, to minimize the chance of an electrical failure taking out both (I guess this is a good idea). I have also read about having one drive and using a second to rsync the data every week. I planned on purchasing two enterprise hard drives of 500 GB to 1 TB, but I don't have any experience with how I should handle my data.
What's the current best practice for automounting a remote drive for automated backup? I want to use Back In Time and maintain snapshots, but it can't do that remotely, so I have to mount a folder outside of Back In Time. I have used sshfs from this howto in the past, and it mostly works, but it seems to lose the connection and not reconnect a lot. [URL]. Is there a more "modern" way? NFS is horribly unreliable and dog slow for me, so it's OUT unless it's changed in the last year.
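For what it's worth, sshfs itself has reconnect options that cure most of the dropped-connection pain, and the mount can live in /etc/fstab so it comes back at boot. A fragment (host, user, and paths are placeholders; the ServerAlive* settings are ssh options that sshfs passes through):

```
# /etc/fstab -- sshfs mount that reconnects automatically after drops
user@backuphost:/srv/backups  /mnt/backups  fuse.sshfs  _netdev,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,IdentityFile=/home/user/.ssh/id_rsa,allow_other  0  0
```

With `reconnect` plus the keepalive settings, a dead link is detected within about 45 seconds and the session is re-established, which makes the mount reliable enough for a tool like Back In Time to point at.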
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is) but still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have one HDD (without RAID), which houses my OS install, and then two 1 TB drives (just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller recognizes the array at boot, as it always has, irrespective of which OS is installed; however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu doesn't see the array. Does anyone know of a resolution (which doesn't involve formatting and/or some other software RAID solution) to get this working that my searches have not taken me to?
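On Ubuntu, Intel motherboard ("fakeraid") arrays like this are handled by the dmraid package rather than a vendor driver; installing it and activating the set usually makes the mirrored volume appear as a single device under /dev/mapper. A sketch of the steps (run manually as root; the mapper name in the comment is a placeholder, not your actual volume name):

```shell
# Activate an existing Intel (ISW) fakeraid set so the mirrored NTFS
# volume shows up as one /dev/mapper device instead of two raw disks.
activate_fakeraid() {
    apt-get install -y dmraid
    dmraid -s        # list the RAID sets the on-disk metadata describes
    dmraid -ay       # activate them; volumes appear under /dev/mapper/
    # then mount the NTFS volume, e.g. (name is a placeholder):
    # mount -t ntfs-3g /dev/mapper/isw_xxxxxxxx_Volume0 /mnt/array
}
```

Because dmraid only reads the existing Intel metadata, this does not reformat anything; the array stays usable from Windows too.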
I got my system up and running with GRUB installed on my primary hard drive. I have now installed two additional drives. I would like to combine the two additional drives into a RAID 1 array. I can only find tutorials on how to do this during the initial install; I cannot find one that tells me how to do it after the install. Is there a way?
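There is: the installer is only needed when the OS itself lives on the RAID. For a data-only mirror, mdadm can build it on a running system. A sketch, with /dev/sdb and /dev/sdc as placeholder names for the two blank drives:

```shell
# Build a RAID 1 mirror from two blank disks on a running system and
# make it assemble automatically at boot. Run manually as root; the
# device names are assumptions, and mkfs destroys existing data.
create_mirror_post_install() {
    apt-get install -y mdadm
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    # record the array so it assembles at boot:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
}
```

After that, add a line to /etc/fstab for wherever you want /dev/md0 mounted, and it behaves like any other disk.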
I have the newest Ubuntu installed on my machine. It currently has a 160 GB SATA drive. I just bought two shiny new 2 TB drives that I want to RAID together as 4 TB. How do I go about adding these two hard drives now that the install is complete? I want the 4 TB assigned to my /home directory.
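Two 2 TB drives presented as one 4 TB volume is RAID 0 (striping — note there is no redundancy, so losing either disk loses everything). A post-install sketch that builds the stripe and migrates /home onto it, with /dev/sdb and /dev/sdc as placeholder device names:

```shell
# Stripe two disks into one ~4 TB volume and move /home onto it.
# Run manually as root, ideally with no users logged in while copying.
setup_striped_home() {
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/newhome
    mount /dev/md0 /mnt/newhome
    cp -a /home/. /mnt/newhome/          # copy the existing home dirs
    echo '/dev/md0 /home ext4 defaults 0 2' >> /etc/fstab
    umount /mnt/newhome
    mount /home                          # mounts over the old /home
}
```

If any redundancy is wanted, the same steps with `--level=1` give a 2 TB mirror instead of a 4 TB stripe.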
I have a RAID 1 that is mounted and working, but for some reason I can also see the individual drives under Devices in GNOME Shell. Is there a way to hide them from GNOME, or from Linux in general, so that only the RAID 1 can be seen?
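The desktop shows the members because udisks enumerates every block device; a udev rule can tell it to skip anything tagged as a RAID member. A fragment, assuming a udisks2-era GNOME (on older releases the variable was UDISKS_PRESENTATION_HIDE instead):

```
# /etc/udev/rules.d/99-hide-raid-members.rules
# Hide linux_raid_member devices from the desktop's device list.
ENV{ID_FS_TYPE}=="linux_raid_member", ENV{UDISKS_IGNORE}="1"
```

Reload the rules (`sudo udevadm trigger`) or reboot, and only the assembled /dev/md device should remain visible in the Devices list.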