Ubuntu :: Use Multiple USB Drives To Provide RAID?
Dec 14, 2010
I've got a 10.10 installation which I'm using as a media/download server. Currently everything is stored on a 1TB USB drive. With the cost of disks falling, and the hassle of trying to back 1TB up to DVD (no, that's not going to happen), I was wondering if there's some Linux/Ubuntu utility which can use multiple disks to provide failover/resilience. Could I just buy another 1TB drive and have it "shadowing" the main one, so that if one dies, I buy a replacement and restore from the copy?
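One hedged way to get that "shadowing" on Linux is a software RAID-1 mirror with mdadm, which keeps both drives identical so either one alone is enough to carry on. A minimal sketch, assuming the two USB drives appear as /dev/sdb and /dev/sdc (the device names and mount point are assumptions):

sudo apt-get install mdadm
# give each drive a single partition first, then mirror the pair
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/media
sudo mount /dev/md0 /mnt/media

When one drive dies you replace it and mdadm rebuilds the mirror onto the newcomer; note that RAID protects against drive failure, not accidental deletion, so it complements rather than replaces backups.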
I have created a RAID on one of my servers. The RAID health is OK, but it shows a warning, so what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
I'm breaking into the OS-drive side of RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1), and have been testing the fail-over and rebuild process. Physically failing out either drive works great. My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: fail one of the disks out of the RAID-1, then image it to a file saved on an external disk using the dd command (if memory serves, something like "sudo dd if=/dev/sda of=backupfilename.img"). Then re-add the failed disk to the array. In the event I needed to roll back to one of those snapshots, I would just use dd to dump the image back onto an appropriate hard disk, boot from it, and rebuild the RAID-1 from that.
Does that sound like good practice, or is there a better way? A couple of notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on hand, and something like a tape drive is also not an option.
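For what it's worth, the fail-image-readd cycle described above can be done with mdadm plus dd; a hedged sketch, assuming the mirror is /dev/md0 with members /dev/sda1 and /dev/sdb1 and the external disk is mounted at /mnt/external (all assumptions):

# drop one member out of the mirror
sudo mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# image the whole disk (partition table and all) to the external drive
sudo dd if=/dev/sda of=/mnt/external/os-backup.img bs=1M
# put the member back and let the array resync
sudo mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat   # watch the rebuild progress

The array runs degraded (no redundancy) for the duration of the dd, which is the main risk of this approach.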
So I set up a RAID 10 system, and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
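Roughly, yes: with mdadm's raid10 in the default layout, 4 active drives behave like two mirrored pairs striped together, while a spare holds no data at all; it sits idle until an active drive fails, at which point mdadm automatically rebuilds onto it. A hedged sketch with assumed device names:

# 4 active members plus 1 hot spare (sdb1 through sdf1 are assumptions)
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 /dev/sd[b-f]1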
Ever need to provide internet access to multiple PCs when you didn't have a router, only a hub? Maybe this isn't original thinking, but then again maybe you didn't think of doing it this way (and I'm sure there are many ways to do it). So I have 2 Ubuntu servers, 1 Windows box and a hub, all 3 with internet access off a single IP and a single Ethernet port.
While searching for a backup method today I came across Clonezilla. I was wondering if this was the right thing for me, and since I needed to back up my roommate's PC for a reformat and reinstall of Windows, I decided I would give it a try, but only if it would work. I didn't want the hassle of going into the main part of the house and finding out which cord was which, as there is a cable modem connected to a switch (4 static IPs with internet) with one port of the switch hooked to a router. Anyway, it didn't work: he was on the router, I was on the switch.
But this got me thinking. When I set up my server to do this, one of the setup scripts said it was setting up internet access for client machines and assigning them IP addresses through a DHCP server it had installed.
So I dug up the hub, connected the internet cable to the hub's uplink port, Server 1 to port 1, Server 2 to port 2 and the Windows box to port 3. The main server gets the ISP-provided IP address and routes it to the hub via a virtual interface. Server 2 is configured for DHCP. The Windows box was set to get its info automatically, but it didn't fill in the DNS info, so I had to do that manually (just a heads up). I decided to use the OpenDNS servers (208.67.222.222 & 208.67.220.220), but I'm sure putting in the gateway IP address would have worked too.
So by now, if you need this, I am sure you are excited and want to get to it. Like I said, there are probably other ways of doing it, ways that don't involve installing Clonezilla and DRBL; maybe just DRBL is needed, or maybe one of them pulled in what's needed as a dependency. All I know is it works; if you know, elaborate so people know. But hey, this way not only do you have internet access on all PCs, you can deploy custom images to them as well.
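For anyone who wants the gist without DRBL: what the setup script automates is, at its core, ordinary NAT. A minimal hedged sketch, assuming eth0 faces the internet and the clients hang off a second (or virtual) interface:

# let the kernel forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
# rewrite outgoing client traffic to use the server's public IP
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

A DHCP server (DRBL installs one for you) then hands the clients addresses and points them at the server as their gateway.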
So, at the moment I have a 7TB LVM with one volume group and one logical volume. In all honesty I don't back up this information. It is filled with data that I can "afford" to lose, but... would rather not. How do LVMs fail? If I lose a 1.5TB drive that is part of the LVM, does that mean at most I could lose 1.5TB of data? Or can files span more than one drive? If so, would it be just one file that spans two drives, or could there be many files that span multiple drives? Essentially, I'm just curious, at a high level, about LVM safety. What are the risks involved?
Edit: what happens if I boot up the computer with a drive missing from the LVM? Is there a first primary drive?
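A plain (linear) LVM has no redundancy, and the filesystem is free to scatter any file's blocks anywhere on the volume, so a dead PV can take out files well beyond its own 1.5TB, and the VG will normally refuse to activate with a device missing (it can be forced with vgchange's --partial option). To see how your particular volume is laid out, these read-only inspection commands are a safe start:

sudo pvs                    # which physical drives are in the volume group
sudo lvs -o +devices        # which PVs each logical volume spans
sudo lvdisplay -m           # segment-by-segment mapping of each LV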
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up, install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems, however, the installs broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live CD gives the error:

fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb

and running fsck from 9.04 installed on the other hard drive gives an error like:

The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>

In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with different drives in different boxes.
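If it is leftover Intel fakeraid (isw) metadata confusing the tools, dmraid itself can remove it; a hedged sketch, with /dev/sdb assumed and the erase step being destructive to that metadata:

sudo dmraid -r                # list any RAID signatures still on the disks
sudo dmraid -r -E /dev/sdb    # erase the stale isw metadata from the drive

After that, fsck should stop trying to hand the device off to fsck.isw_raid_member.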
I want to make a RAID5 array with four 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks, and once I copy the data from the 4th onto the array, I will add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
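The start-with-three-then-grow plan is exactly what mdadm's reshape supports. A hedged sketch with all device names assumed (older mdadm versions may also want a --backup-file for the reshape):

# build the array on the three empty disks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0
# ...copy the data off the 4th drive onto /dev/md0, repartition it, then:
sudo mdadm --add /dev/md0 /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=4
# once the reshape finishes, enlarge the filesystem
sudo resize2fs /dev/md0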
I have recently installed an Asus M4A77TD Pro system board, which supports RAID.
I have 2 x 320GB SATA drives I would like to set up RAID-1 on. So far I have configured the BIOS to RAID-1 for the drives, but when installing Ubuntu 10.04 from the CD, it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I just have 2 drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to do this within Ubuntu.
I have a RAID 6 built on six 250GB HDDs with ext4. I will be upgrading the RAID to four 2TB HDDs.
How would one go about this? What commands would need to be run? I'm thinking about replacing the drives one at a time and letting it rebuild, but I know that would take a lot of time (which is fine). I don't have enough SATA ports to set up the new RAID alongside and copy things over.
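One hedged sketch of the in-place route, with every device name assumed. The awkward part is dropping from six members to four: the filesystem and array size must first be shrunk to fit the 4-member geometry (only about 500GB with 250GB components), so if the used data exceeds that, copying out to the new disks first, or building the new array degraded and copying over, is the saner path:

# replace four old disks one at a time, waiting out each rebuild
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sudo mdadm /dev/md0 --add /dev/sdb1    # the new 2TB disk, repartitioned
cat /proc/mdstat                       # wait for the resync before the next swap
# shrink fs and array, reshape to 4 members, then claim the full 2TB size
sudo resize2fs /dev/md0 400G
sudo mdadm --grow /dev/md0 --array-size=450G
sudo mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0.bak
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0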
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is) but still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD (without RAID) which houses my OS install, and then I have two 1TB drives in a RAID 1 (mirroring) array; these are just NTFS-formatted drives with binary files on them, nothing more. The Intel RAID controller recognizes the array at boot as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu gives me no way to get at the array. Does anyone know of a resolution (which doesn't involve formatting and/or some other software RAID solution) to get this working that my searches have not turned up?
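On Linux the Intel Matrix (fakeraid) mirror is handled by the dmraid package rather than a vendor driver; a hedged sketch (the mapper name is an assumption, taken from whatever dmraid reports):

sudo apt-get install dmraid
sudo dmraid -ay                  # activate every RAID set the BIOS defined
ls /dev/mapper/                  # the mirror should appear as an isw_* device
sudo mount -t ntfs-3g /dev/mapper/isw_xxxx_Volume0 /mnt/mirror

This leaves the array itself untouched, so Windows 7 should keep seeing it exactly as before.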
I'm looking for advice on which drives to add to my server for software RAID 5. I would like to use 2TB drives for the array. The server currently boots off a RAID 1 array, and I have a couple of other drives mounted until I build a RAID 5 array with new drives. I've read horror stories about using the Western Digital WD20EARS and Seagate ST32000542AS. So I'm wondering which large drives are best to use in software RAID?
I got my system up and running with GRUB installed on my primary hard drive. I have now installed 2 additional drives, which I would like to combine into a RAID 1 array. I can only find tutorials on how to do this during the initial install; I cannot find one that tells me how to do it after the install. Is there a way?
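There is: the installer's RAID support is just mdadm, which works fine on a running system. A hedged sketch, assuming the two new drives are /dev/sdb and /dev/sdc, each carrying one partition of type "Linux RAID autodetect":

sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
# record the array so it assembles automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u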
I am trying to use three 3TB Western Digital drives in a RAID 5 software array. The trouble is that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.
Here are the commands and output:

$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name:  BackupFull6
RAID type:  RAID5
RAID size:  5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] : y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name   : isw_cdjhcaegij_BackupFull6
size   : 3131048448
stride : 128
type   : raid5_la
status : ok
subsets: 0
devs   : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 sectors, or about 1.5 TB. The first command seemed to imply it was going to create an array of 5589G.
System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
I have the newest Ubuntu installed on my machine. It currently has a 160GB SATA drive. I just bought two shiny new 2TB drives that I want to RAID together as 4TB. How do I go about adding these two hard drives even though the install is complete? I want the 4TB assigned to my /home directory.
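Two 2TB drives presented as one 4TB volume is RAID 0 (striping), which has no redundancy: either drive failing loses everything, which is worth saying out loud before the sketch. Assuming the new drives are /dev/sdb and /dev/sdc (an assumption), one hedged post-install recipe:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo mkdir /mnt/newhome && sudo mount /dev/md0 /mnt/newhome
sudo rsync -a /home/ /mnt/newhome/        # copy the existing home across
echo '/dev/md0 /home ext4 defaults 0 2' | sudo tee -a /etc/fstab

After a reboot, /home lives on the 4TB stripe; the old copy on the 160GB drive can then be removed.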
I am going to set up a new Ubuntu 10.04 installation using RAID 1 soon. Installation will be via the alternate CD. Older distributions required manually installing GRUB to the second drive, to boot if the first drive fails. I found different statements about how this is handled since 9.10, e.g.
Quote:
Install GRUB boot-loader on second drive (this step is not needed if you use Ubuntu 9.10)
or
Quote:
installing GRUB to second hard drive depending on your distribution
> grub-install /dev/md0
or
> grub-install /dev/sda
> grub-install /dev/sdb
Is GRUB2 automatically installed to all RAID drives when using the 10.04 alternate CD, as if it executed a sort of "grub-install /dev/md0" during the installation?
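I can't say what the 10.04 alternate installer guarantees, but it's easy to make sure by hand afterwards; GRUB2 goes into the MBR of each physical disk (installing to /dev/md0 itself isn't meaningful for GRUB2). A hedged check:

sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo dpkg-reconfigure grub-pc    # debconf also lets you pick the install devices

With the boot sector on both disks, either one can boot the degraded array alone.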
How do I migrate my whole server to larger hard drives? (I.e., I'd like to replace my four 1TB drives with four 2TB drives, for a new total of 4TB instead of 2TB.) I'll post the output from everything (relevant) that I can think of in code tags below.
I'd like to end up with much larger /home and /public partitions. When I first set up RAID and then LVM, it seemed like it wouldn't be too hard once this day arrived, but I've had little luck finding help online for upgrades and resizing, versus simply rebuilding after a failure. Specifically, I figure I have to mirror the data over to the new drives one at a time, but I can't figure out how to build the RAID partitions on the new disks so as to have more space (yet mirror with the old drive that has a smaller partition)... don't the RAID partitions have to be the same size to mirror?
Ubuntu Server (karmic) 2.6.31-22-server #65-Ubuntu SMP; fully updated
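To the size question: as far as I know, mdadm always mirrors at the size of the smallest member, so you can partition each new 2TB disk to full size, let it sync against the old 1TB partition, and grow everything once the last old disk is gone. A hedged sketch of the growth chain, with the md device, VG and LV names all assumed:

sudo mdadm --grow /dev/md1 --size=max    # let md use the full new partitions
sudo pvresize /dev/md1                   # grow the LVM physical volume
sudo lvextend -L +500G /dev/vg0/home     # grow a logical volume (size assumed)
sudo resize2fs /dev/vg0/home             # grow the filesystem to match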
I have two 1.5TB hard drives (neither of them is my OS drive) that don't show up under "Places" but are detected under "Disk Utility". I tried to reformat them, but Ubuntu tells me that they are in use (even though they are not mounted). gparted also detects them and shows them as an NTFS partition. I have deleted and repartitioned them to NTFS as well. This works until I restart my computer. The funny thing is that Windows 7 sees them just fine. More detail as to how this happened below, after the system specs.
System:
AMD Phenom II X4 955
ASUS M4A79XT EVO motherboard
8GB DDR3 1333
1 500GB WD SATA drive (dual boot Windows 7 & Ubuntu 10.10)
2 1TB WD SATA drives (extra storage)
2 1.5TB Seagate SATA drives (extra storage, these are the problem children)
Here's how I got here:
Installed a dual boot with Windows 7 and Ubuntu 10.10. Everything was fine; ALL drives showed up and were mountable in Ubuntu. I decided to set up a RAID 1 with my two 1.5TB drives. I restarted, changed my SATA mode to RAID instead of IDE and created my RAID 1. I then realized that through this motherboard's RAID setup, I couldn't have it copy files from one disk to the other to set up the mirror. So, after I rebooted and the RAID started building, I cut the power and unplugged my drives in a desperate attempt to keep my data. I then went back into the BIOS and set my SATA mode back to IDE instead of RAID. I was able to back up my data, but this is when the problem started.
Again, Windows 7 sees and uses the drives just fine. I copied the data I wanted from my 1.5TB drives to my 1TB drives and restarted into Ubuntu. But no 1.5TB drives appeared under Places. I started Disk Utility and confirmed that Ubuntu does actually see the drives. However, it still lists them as being part of a RAID array (I did delete my RAID array properly through the BIOS after backing up my data). I'm not sure why it thinks that, and I believe that's my problem. Also, Disk Utility lists a THIRD 1.5TB drive under "USB and Peripherals". Could that be my motherboard telling Ubuntu that a RAID is still set up even though I deleted the RAID pair?
What I have tried: reformatted the drives via Windows 7 as NTFS (this completed but didn't solve my problem); repartitioned the drives with gparted as NTFS (this works until I restart my computer); attempted to reformat under Ubuntu, but it gives me an error saying the drives are busy; reinstalled Ubuntu (but didn't reformat), which didn't work. What I'm thinking: flash my BIOS so my motherboard starts out fresh and hopefully doesn't tell Ubuntu I have a RAID anymore, then reinstall Ubuntu again (this time reformatting my OS drive). Anyone have ideas as to what's going on? FYI, I'm new to Ubuntu.
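"Busy but not mounted" drives plus a phantom extra disk is the classic signature of stale fakeraid metadata that dmraid maps at boot; Windows ignores that metadata, which is why it sees the drives fine. A hedged sketch (the mapping name and drive letters are assumptions, and the erase step destroys only the RAID signature, though double-check the target device):

sudo dmsetup ls                  # list device-mapper mappings holding the disks
sudo dmsetup remove pdc_xxxx     # tear down the stale mapping (name from above)
sudo dmraid -r                   # show which drives still carry RAID metadata
sudo dmraid -r -E /dev/sdX       # wipe it so it stays gone across reboots

That should make a BIOS flash and reinstall unnecessary.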
I have three 640GB SATA hard drives that I would like to put into a RAID 5 configuration. I would like to opt for software RAID 5 so it's hardware-independent. I was trying to follow these instructions, but they seem a bit dated.
So Windows wouldn't recognize my drives as a RAID setup, so I disabled it and switched to IDE; now the Ubuntu 9.10 installation will only recognize my drives as RAID. I have an ASUS M3A32-MVP Deluxe series motherboard; it has 4 SATA connectors and 2 Marvell-controlled SATA connectors. On the 4 SATA connectors I have my 2 WD 500GB HDs, my DVD burner, and my external USB/eSATA slots. On the Marvell-controlled SATA connector I have a WD 160GB HD. Originally, when I built the computer, I wanted a RAID setup with the 2 500GB HDs.
But Windows wouldn't recognize the RAID setup and wouldn't boot properly. So I said screw it, removed the RAID and set all the drives to IDE. Then, when I tried to install Ubuntu 9.04, it would only recognize my 2 500GB HDs as RAID. Gparted recognizes the drives as both RAID and IDE. Eventually, after a day or two of praying and messing around, the installer recognized both drives as RAID and IDE. A couple of months later, here I am trying to install Ultimate Edition 1.4.
I have a RAID 1 that is mounted and working. But for some reason I can also see the individual member drives under Devices in gnome-shell. Is there a way to hide them from GNOME, or from Linux in general, so that only the RAID 1 is visible?
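One hedged option is a udev rule telling udisks to leave the member drives out of the desktop's device list; the kernel names, and which environment variable your udisks version honors, are assumptions:

# /etc/udev/rules.d/99-hide-raid-members.rules
KERNEL=="sdb", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"
KERNEL=="sdc", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"

Run sudo udevadm trigger (or reboot) for the rule to take effect.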
I am trying to set up an mdadm RAID in a new PC that I am building for a home theatre. The machine boots just fine from /dev/sdc running Ubuntu 9.10. However, in gparted, /dev/sda and /dev/sdb show as being part of /dev/mapper/sil_ajbicfacbaej. Both /dev/sda and /dev/sdb were drives that used to be part of a SIL hardware RAID on a previous machine. I would like to use them as a new mdadm RAID on this new machine; the old hardware card was really quite slow, and the drives are now plugged into the motherboard and should be much faster there.
My computer has 2 40GB hard drives (yes, it's really old). One of these hard drives has Ubuntu installed on it, and I would like to use the second hard drive as a data storage device that is usable by anyone who just wants a random place to drop random stuff. How do I do this?
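No RAID needed for this one; format the spare drive, mount it somewhere permanent, and make the mount point world-writable with the sticky bit (like /tmp) so users can't delete each other's files. A hedged sketch, assuming the second drive is /dev/sdb with one partition:

sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /media/storage
echo '/dev/sdb1 /media/storage ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /media/storage
sudo chmod 1777 /media/storage    # anyone may write; sticky bit protects files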
I know I can simply create a degraded RAID array and copy the data to the other drive like this:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
But I want that specific disk to keep its raw ext3 filesystem so I can still use it from FreeBSD. When using the command above, the disk becomes a RAID member and I can't simply mount /dev/sdb1 anymore. A little background: the drives in question are used as backup drives for a couple of Linux and FreeBSD servers. I am using the ext3 filesystem to make sure I can quickly recover the data, since both FreeBSD and Linux can read it without problems. Does someone have a different solution for this (2 drives in RAID 1 that are readable by FreeBSD and writeable by Linux)?
I'm renting a dedicated server from a company that claims the server has 2 hard drives in a software RAID 1 array, but I need to make sure the server really has the 2 HDDs, and check the size of the 2nd drive. How do I do that? The system is CentOS 5.3.
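For Linux software RAID (md), the kernel itself will tell you; the first command doesn't even need root. A short hedged check (the md device name is an assumption):

cat /proc/mdstat                 # lists md arrays and their member drives
sudo mdadm --detail /dev/md0     # per-array view: members, state, sizes
sudo fdisk -l                    # every physical disk the kernel sees, with sizes

If /proc/mdstat shows an array with two distinct drives (e.g. sda1 and sdb1), the mirror is real, and fdisk -l gives each drive's capacity.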