I want to make a RAID 5 array with four 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks and then, once I copy the data from the 4th onto the array, add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
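Here is the rough sequence of mdadm commands I think I'd need, pieced together from what I've read; the device names (/dev/sdb through /dev/sde) are just placeholders I'd need to confirm with lsblk first:

Code:
# create the array on the three empty drives first
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
# ...copy the data off the fourth drive, then add it and grow the array
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
# once the reshape finishes, grow the filesystem into the new space
resize2fs /dev/md0

Does that look roughly right, or am I missing something important?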
I'm renting a dedicated server with a company that claims the server has 2 hard drives in a software RAID 1 array, but I need to make sure the server really has the 2 HDDs and find out the size of the 2nd drive. How do I do that? The system is CentOS 5.3.
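So far the closest I've come up with is running these and trying to read the output, but I'm not sure I'm interpreting it correctly:

Code:
cat /proc/mdstat         # should list the md array and its member drives
fdisk -l                 # should list every physical disk with its size
mdadm --detail /dev/md0  # details of the array, assuming it is /dev/md0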
I'm about to install Ubuntu on two 250-gigabyte hard drives in a RAID 1 array, but I'm confused about how to partition my hard drives. How much space should I give to each partition? How many partitions should I create and where should I mount them? (I should mention that Ubuntu will be the only OS on this array.)
I got my system up and running with GRUB installed on my primary hard drive. I have now installed 2 additional drives, which I would like to combine into a RAID 1 array. I can only find tutorials on how to do this during the initial install; I cannot find one that tells me how to do it after the install. Is there a way?
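From piecing together bits of different tutorials, I think the post-install version would look roughly like this, assuming the two new drives show up as /dev/sdb and /dev/sdc (which I'd check first), but I'd like confirmation before I run it:

Code:
# build the mirror from the two new, empty drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mkdir -p /mnt/mirror
mount /dev/md0 /mnt/mirror
# record the array so it assembles on boot (path may be /etc/mdadm/mdadm.conf on some distros)
mdadm --detail --scan >> /etc/mdadm.conf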
I am trying to use three 3TB Western Digital drives in a RAID 5 software array. The trouble is that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.
Here are the commands and output:

$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name:  BackupFull6
RAID type:  RAID5
RAID size:  5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS:      /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name   : isw_cdjhcaegij_BackupFull6
size   : 3131048448
stride : 128
type   : raid5_la
status : ok
subsets: 0
devs   : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 (at 512 bytes per block, roughly 1.5 TB). The first command seemed to imply it was going to create an array of 5589 GB.
The system is:
Description: Ubuntu 10.04.2 LTS
Release:     10.04
Codename:    lucid
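If the ISW/dmraid route turns out to be a dead end, my fallback plan is plain mdadm, which I gather would be something like this (just a sketch, I haven't run it yet):

Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
cat /proc/mdstat             # check the reported size and the resync progress
sudo mdadm --detail /dev/md0

But I'd still like to understand why dmraid is reporting the size it does.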
I am learning software RAID 1 with CentOS 5.5. I created the array without any problems, then removed the first drive to check there were no problems, and it booted. I have installed the old drive back in the system as hdc and need to resync the drives (I reused the old drive since its partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary and hdc secondary) using mdadm?
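My guess at the mdadm equivalent of raidhotadd, assuming the array is /dev/md0 and the re-added partition is /dev/hdc1 (repeated for each md device / partition pair), is:

Code:
mdadm --manage /dev/md0 --add /dev/hdc1   # should kick off the resync onto the re-added partition
cat /proc/mdstat                          # watch the rebuild progress

Is that all there is to it?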
This is the message I get when I try to start it: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array". Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off of it would be awesome.
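Based on other threads, this is what I was planning to try next, though I'm nervous about it; the partition names below are just examples and I'd confirm them with --examine first:

Code:
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1   # compare event counters and array state on each member
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat

Is --force safe here, or is there something less risky I should do first?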
I have a home Samba server with a 3ware Escalade 8506-8. I have 5 x 500GB hard drives in a RAID 5 array. Recently, my 8506 died and I need to get a new one. However, I saw a 3ware Escalade 9500S-12 on eBay for about $20 more than a replacement 8506-8.
My question is: if I put my drives on the 9500S, will it recognize my existing RAID array? Or will it want to build a new RAID array and format all of my data? Hope I have asked this question clearly; I'm a little short on sleep this week.
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ for an Adaptec controller)?
I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard disks for running backups and web hosting. There are reviews of certain disks stating that the low-power models get kicked out of the RAID due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.
Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.
I have the newest Ubuntu installed on my machine. It currently has a 160GB SATA drive. I just bought two shiny new 2TB drives that I want to RAID together as 4TB. How do I go about adding these two hard drives even though the install is complete? I want the 4TB assigned to my /home directory.
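Here's roughly what I've pieced together so far, assuming the new drives come up as /dev/sdb and /dev/sdc and that striping (RAID 0) is what I want since I need the full 4TB; please tell me if this is the wrong approach:

Code:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
# copy the existing /home over before pointing the mount at it
mount /dev/md0 /mnt/newhome
rsync -a /home/ /mnt/newhome/
# then add something like this to /etc/fstab:
# /dev/md0  /home  ext4  defaults  0  2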
I have three 640GB SATA hard drives that I would like to put into a RAID 5 configuration. I would like to opt for software RAID 5 so it's hardware-independent. I was trying to follow these instructions, but they seem a bit dated.
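As far as I can tell, the dated instructions boil down to something like this (sdb1/sdc1/sdd1 stand in for partitions on my three 640GB drives):

Code:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # config location varies by distro

Is that still the current way to do it?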
Windows wouldn't recognize my drives as a RAID setup, so I disabled it and switched to IDE; now the Ubuntu 9.10 installer will only recognize my drives as RAID. I have an ASUS M3A32-MVP Deluxe Series motherboard; it has 4 SATA connectors and 2 Marvell-controlled SATA connectors. On the 4 SATA connectors I have my 2 WD 500GB hard drives, my DVD burner, and my external USB/eSATA slots. On the Marvell-controlled SATA connector I have a WD 160GB hard drive. Originally, when I built the computer, I wanted a RAID setup with the 2 500GB drives.
But Windows wouldn't recognize the RAID setup and wouldn't boot properly, so I said screw it, removed the RAID, and set all the drives to IDE. Then, when I tried to install Ubuntu 9.04, it would only recognize my 2 500GB drives as RAID. GParted recognizes the drives as both RAID and IDE. Eventually, after a day or two of praying and messing around, the installer recognized both drives as RAID and IDE. A couple of months later, here I am trying to install Ultimate Edition 1.4.
I am currently trying to set up a group of hard drives in a RAID configuration. My system is running Red Hat Enterprise Linux Client release 5.1 as the base OS, and I am booting from CD. I am trying to image a set of drives that have not been imaged before. When the GUI dialog window for disk setup is displayed, it shows a default disk layout including an LVM slice. In the disk layout there is already a /boot partition. It is not what I would like, so I edit it to the size I need for my system and make it the primary partition. I also select it to be a software RAID. I then add three more partitions for my drive 'A', all of type software RAID and NOT primary partitions.
At this point my drives have the correct number of partitions, except for still showing the LVM slice. I select 'RAID' again, followed by 'Clone a drive to create a RAID device ...', followed by 'OK'. I then get a dialog to select the source and target. I select my drive 'A' to be the source and 'B' to be the target, followed by 'OK'. An error dialog is received stating that not all the partitions are of type software RAID. The disk partitions are all of type software RAID except the extended LVM slice. I cannot get past this point, and I am following a procedure written some time ago by a person who is not available.
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between 2 hard drives for redundancy (I know it's not hardware RAID, but the budget is tight) with the following table:
[Code]....
I set these up using the RAID button in the partitioning section of the installer, except for swap, which I set up with the New Partition button, creating one swap partition on each hard drive that didn't take part in the RAID. Almost every time I tried to do this install, it halted due to an error with one of the RAID partitions and exited the installer. I actually managed it once, in about... ooo, 10-15 tries, but I broke it. After getting very frustrated, I decided to build it using just 3 partitions:
[Code]....
I left the rest untouched. This worked fine; after completing the install and setting up GRUB, I rebooted into the installation. I then installed GParted and cut the drive up further to finish my table on both hard drives. I then used mdadm --create, etc., to create my RAID partitions. So I now have
I have created a RAID on one of my servers. The RAID health is OK, but it shows a warning; what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. It has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
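What I actually want is a linear (JBOD/spanning) array rather than the striped set I suspect Disk Utility built, so my fallback is to do it by hand with mdadm, which from the man page I think looks something like this (drive letters and count are placeholders for whatever the VM sees):

Code:
mdadm --create /dev/md0 --level=linear --raid-devices=6 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
mkfs.ext4 /dev/md0

Am I on the right track?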
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1), and have been testing the fail-over and rebuild process. It works great when physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: fail one of the disks out of the RAID-1, then image it to a file saved on an external disk using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img"). Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use dd to dump the image back onto an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
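To make it concrete, the sequence I had in mind is roughly this (sda/sda1 and the image path are just illustrative):

Code:
# take one member out of the mirror
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
# image the whole disk to a file on the external drive
dd if=/dev/sda of=/mnt/external/os-backup.img bs=1M
# put the member back and let it resync
mdadm --manage /dev/md0 --add /dev/sda1
cat /proc/mdstat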
So I set up a RAID 10 system, and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
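For context, the sort of command I mean is something like the one below (device names are only an example): my understanding is that the 4 devices listed as active hold the data as two mirrored pairs striped together, while the extra ones sit idle as hot spares until a drive fails and one of them is pulled in to rebuild.

Code:
mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1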
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am currently running Intel RAID software under W7, and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID setup causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files that I have accumulated over the years, and if the drive fails I will be S.O.L. I have seen discussions where someone has RAID 1 set up but doesn't keep the drives internal (to the case); they bought 2 separate external hard drives with eSATA to minimize the risk of an electrical failure taking out both drives (I guess this is a good idea). I have also read about having one drive and then using a second to rsync the data to every week. I planned on purchasing 2 enterprise hard drives of 500 GB to 1 TB, but I don't have any experience with how I should handle my data.
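If I go the rsync route, I imagine the weekly job would be something as simple as this cron script (the paths are made up):

Code:
#!/bin/sh
# e.g. saved as /etc/cron.weekly/backup-data
rsync -a --delete /srv/files/ /mnt/backupdrive/files/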
I am using Ubuntu 10.04 x64. I am not trying to install Ubuntu on a RAID 1 drive like all of the guides are for. I have a RAID 1 array that I am using for data storage. In Windows it shows up as a single array just fine. In Linux it shows up as 2 separate drives. I don't care how they show up, to be honest; I just need data written to one drive to be written to the other automatically as well, so my RAID isn't screwed up. Looking through different articles and forums, I find a lot of stuff saying it should show up as /dev/mapper/dxxx or something under /dev/mapper. All that shows up there for me is a device called control, which doesn't seem to do anything.
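Since it looks like a BIOS/fakeraid (dmraid) set rather than a Linux md array, this is what I was going to try to get a single device to appear under /dev/mapper, though I'm not sure it's the right tool:

Code:
sudo apt-get install dmraid
sudo dmraid -s    # list any RAID sets found in the on-disk metadata
sudo dmraid -ay   # activate them, which should create /dev/mapper/<setname>
ls -l /dev/mapper/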
I just installed Ubuntu 10.10 64-bit and wanted to get access to my nvidia RAID array. This array is working and is NTFS-formatted, but it wasn't showing up through normal means in Ubuntu (for example, the NTFS Configuration Tool didn't display it). Here's what the system showed.
Code:
root@hermes:~# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 59 2010-11-03 22:39 control
lrwxrwxrwx 1 root root      7 2010-11-03 22:42 nvidia_dadijiag -> ../dm-0
[code]....
Is my mirror still in effect, or did I just mount one of the specific drives from the mirror?
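For what it's worth, this is how I was checking what I actually mounted; my assumption is that mount should show the /dev/mapper device rather than a raw /dev/sdX if the mirror is still in play:

Code:
mount | grep -i ntfs   # which device is the NTFS volume mounted from?
sudo dmraid -s         # status of the nvidia RAID set
ls -l /dev/mapper/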
I'm going a little bit crazy. I can't seem to remove my RAID 1 arrays. Any suggestions? I don't need to save the data; the drives are empty. I'm upgrading to four 2TB drives. Running Lucid Lynx server.
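This is roughly what I've been attempting (md numbers and member partitions are from memory):

Code:
umount /dev/md0                      # if it is mounted
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1    # repeat for every former member partition
# then remove the corresponding ARRAY line from /etc/mdadm/mdadm.conf
# and run: update-initramfs -u

Is there a step I'm missing that keeps the arrays coming back?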
How do I create a RAID 0 array of these drives? On each drive I have a 160GB partition formatted as type fd (Linux RAID autodetect). I have tried the following:
Code:
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sda1
mkfs.ext4 /dev/md0
mount /dev/md0 /media/raid
but for some reason it doesn't work.
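These are the checks I've run (or think I should run) to see where it falls over:

Code:
cat /proc/mdstat           # is md0 actually assembled and running?
mdadm --detail /dev/md0    # state, level and member list
dmesg | tail -n 30         # any errors from the create / mkfs / mount steps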
Consider the following setup: the Ubuntu system is installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of X physical HDDs (RAID 6 or RAID 10) is used for storage. The RAID setup is done during system install. If I suffer a total crash of the SSD and lose my system, will I be able, using a new system disk, to "reconnect" to the RAID array even though the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized "out of the box" when reinstalling Ubuntu?
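My understanding (please correct me if this is wrong) is that the md metadata lives in superblocks on the member disks themselves, so on a fresh install something like this should bring the array back, and the only file worth keeping a copy of is mdadm.conf for convenience:

Code:
sudo mdadm --assemble --scan                         # scan all disks for md superblocks and assemble arrays
cat /proc/mdstat
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf  # regenerate the config on the new system
sudo update-initramfs -u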
I have Fedora 14 installed on my main internal drive. I have one Fedora 14 and one Fedora 15 installed on two separate USB drives. When I boot into any of these drives, I can't access any of the other hard drives from the other installs... well, I can, but just the boot partitions. Is there any way of mounting the other partitions so I can access the information? I guess even an explanation of why I can't view them would be good too.
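In case it helps explain what I'm after, this is what I was going to try once I understand why they're hidden (the partition name is just an example, and I suspect LVM is involved since these are default Fedora installs):

Code:
sudo blkid                           # list partitions with their filesystems and UUIDs
sudo mkdir -p /mnt/otherroot
sudo mount /dev/sdb2 /mnt/otherroot  # substitute the real partition
# if the root filesystem is inside an LVM volume group:
sudo vgscan && sudo vgchange -ay
sudo lvs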
I'm just about to be given a Power Mac G5 (Late 2005) Dual 2.0GHz. I think this was the last G5 produced. I plan on using it as a file server/NAS and will probably run 10.04 LTS (or maybe 8.04 LTS). I would install a SATA RAID controller and run four 1TB drives in a RAID 5 configuration. The only thing I'm unsure about is choosing a compatible RAID controller. I need to find a RAID controller that:
- Is PCIe
- Is compatible with both the Power Mac and Ubuntu PPC
- Does true hardware RAID
- Doesn't cost a fortune!
Am I right in thinking that the card might need to be Open Firmware compatible? If it makes any difference, I plan on running the OS from a separate 5th drive. I've found this on eBay. I asked the seller, and he claims it supports true hardware RAID and says the chipset is a Silicon Image SIL3124. It does seem suspiciously cheap though...