Ubuntu Installation :: Raid 0 - Two Hard Disk Array
Jul 8, 2010
What is the best way to install Windows and Linux on a two-hard-disk array? With fakeraid there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?
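For the fakeraid side, one route people report some success with is activating the BIOS RAID set with dmraid from the live CD before running the installer, so the array shows up as a single device under /dev/mapper. A rough sketch (the mapper name is just an example):

sudo apt-get install dmraid      # reads the BIOS/fakeraid metadata
sudo dmraid -ay                  # activate all detected RAID sets
ls /dev/mapper/                  # the array should appear here, e.g. isw_xxxx_Volume0

The installer can then be pointed at the /dev/mapper device rather than the individual disks.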
I had done a new Lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that Lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk, which I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got three "grep: /proc/modules: No such file or directory" errors and a "cat: /proc/cmdline: No such file or directory" error, so I created the directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in GNOME), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
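For what it's worth, the recipe usually suggested for this situation is to bind-mount /dev, /proc and /sys into the chroot before regenerating the initramfs and the GRUB config, and to take the root UUID from blkid rather than typing it by hand. A rough sketch, assuming the array is still mounted at /media/raid_array:

sudo blkid                                      # confirm the UUID of the array's root filesystem (the 689a... one)
sudo mount --bind /dev  /media/raid_array/dev
sudo mount --bind /proc /media/raid_array/proc
sudo mount --bind /sys  /media/raid_array/sys
sudo chroot /media/raid_array
# inside the chroot: edit /etc/fstab so the root entry uses the array's UUID, then rebuild the boot files
update-initramfs -u
update-grub

If you boot from the array's own disks rather than the 80 GB disk, running grub-install on each of them from inside the chroot would also be needed so their boot records point at the new root.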
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. It has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
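Whatever the full drive list was, the drop from 500GB to 480GB is the signature of striping: a striped (RAID-0) array is sized as smallest member × number of members, i.e. 160GB × 3 = 480GB. A linear (JBOD-style) md array concatenates the members instead, so mixed sizes simply add up. A rough sketch with mdadm, assuming the USB disks appear as /dev/sdb through /dev/sdg (device names illustrative):

sudo mdadm --create /dev/md0 --level=linear --raid-devices=6 /dev/sd[b-g]   # concatenate, no striping
sudo mkfs.ext4 /dev/md0                                                      # one filesystem across all members
sudo mdadm --detail /dev/md0                                                 # size should be roughly the sum of the members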
I'm about to install Ubuntu on two 250-gigabyte hard drives in a RAID 1 array, but I'm confused about how to partition my hard drives. How much space should I give to each partition? How many partitions should I create and where should I mount them? (I should mention that Ubuntu will be the only OS on this array.)
I am using Ubuntu 10.10. I have a system set up on a 1 TB HD. I also have another 1 TB HD which I'd like to use to mirror the first drive, so that if the primary HD fails I can boot and operate from the mirrored drive. I've read that this is possible using RAID; however, I am confused about whether it can be set up with a HD that already has an Ubuntu system on it. Also, from what I can make out, the motherboard does not have a RAID option.
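One approach that is often described (I haven't tried it on 10.10 myself) is to build a degraded RAID 1 on the second disk, copy the running system onto it, and only then add the original disk as the second member. The motherboard not having a RAID option doesn't matter, since mdadm RAID is done entirely in the kernel. A rough sketch, assuming the current system is on /dev/sda and the empty, already partitioned disk is /dev/sdb:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # one-member, degraded mirror
sudo mkfs.ext4 /dev/md0
# copy the system across, point fstab and GRUB at /dev/md0, boot from it, then:
sudo mdadm --add /dev/md0 /dev/sda1                                          # old disk joins the mirror and resyncs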
I want to make a RAID5 array with four 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks and then, once I copy the data from the 4th onto the array, add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info but most of what I have found is a bit over my head.
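The usual pattern I've seen for this is to build the array with three members, move the data over, then add the fourth drive and grow the array. A rough sketch, assuming the three empty drives are /dev/sdb-/dev/sdd and the full one is /dev/sde, each with a single RAID partition (device names illustrative):

sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0
# ...copy the data from the full drive onto /dev/md0, then reuse that disk:
sudo mdadm --add /dev/md0 /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=4    # reshape from 3 to 4 members; this can take many hours
sudo resize2fs /dev/md0                        # grow the filesystem into the new space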
I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal, however I had just recently constructed an md RAID5 array of three 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know that the array is intact. All the required data is sitting on those disks. Since only the OS disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev devices like when I initially built the array?
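For what it's worth, the array description lives in the superblocks on the data disks themselves, so after reinstalling the OS the job is to assemble, not to re-create (running --create against existing members risks overwriting them). A rough sketch:

sudo apt-get install mdadm
sudo mdadm --examine --scan            # lists the arrays found in the on-disk superblocks
sudo mdadm --assemble --scan           # assembles everything it found, e.g. as /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # make it persistent
sudo update-initramfs -u               # so the array assembles at boot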
I'm renting a dedicated server with a company that claims the server has 2 hard drives in a software RAID 1 array, but I need to make sure the server really has the 2 HDDs, and check the size of the 2nd drive. How do I do that? The system is CentOS 5.3.
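If it really is Linux software RAID, a few read-only commands run as root should confirm both drives and their sizes; a quick sketch:

cat /proc/mdstat            # lists the md arrays and which disks/partitions back them
fdisk -l                    # shows every physical disk the kernel sees, with sizes
mdadm --detail /dev/md0     # level, members and state of the first array (adjust the device name)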
I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard discs, for running backups and web hosting. There are reviews of certain disks stating that the low-power disks will get kicked out of the Raid due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.
Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.
This concerns Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others create the RAID5 array with the previously created partitions as members. E.g.,
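To make the contrast concrete, the two styles differ only in what gets passed as members; a sketch with placeholder device names:

# partition-based members (each disk carries one full-size partition of type fd, "Linux raid autodetect")
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# whole-disk members (no partition table on the disks at all)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd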
Our server is a CybertronPC I2XV9080 Imperium Tower. It is equipped with a supermicro X7DVL-I Motherboard and Quad 750 GB SATA2 RAID edition hard drives in a raid 5 array. We tried to install Centos on the Raid5 array with Device-Mapper as the LVM. In the BIOS SATA Raid was enabled and the ICH RAID code base option was set to [Intel].
Intel Matrix Storage Manager Option ROM V22.214.171.1242 ESB2 RAID
ID  Name    Level   Strip  Size    Status  Bootable
0   Raid5   Raid 5  64KB   80GB    Normal  Yes
1   Raid_5  Raid 5  64kB   2000GB  Normal  Yes
[code].....
Can I have multiple RAID levels across the same array, or would that lead to problems as above? Is the root cause of my problem the fact that Intel RAID5 is not supported for Linux, based on the following link: http:[url]....
We have some servers that run in very harsh environments (research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD): Code: dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img then reverse the process on the new PC, finally using Code: mdadm --assemble to re-create and re-build the array.
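For completeness, the restore side of that approach would be roughly the reverse, again from a live CD; a sketch (untested on this exact setup, device name illustrative):

gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096   # write the saved image onto the replacement disk
mdadm --assemble --scan                                           # then assemble the array from its superblocks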
I am currently trying to do software RAID 1 on a running Ubuntu 9.10 system with mdadm. I might have done something wrong and I'm trying to go back to the beginning. Does anyone know how to remove all the RAID info from a hard disk and get it back to its original state?
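Assuming the goal is just to strip the md metadata so the disk looks like a plain disk again, the relevant command is --zero-superblock on each former member, after stopping the array; a sketch with example device names:

sudo mdadm --stop /dev/md0                 # stop the array if it is still assembled
sudo mdadm --zero-superblock /dev/sda1     # wipe the RAID superblock from each former member
sudo mdadm --zero-superblock /dev/sdb1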
I wanted to implement RAID5 such that one partition is from my laptop's hard disk and the others are from other hard disks. After making one partition a RAID partition, I rebooted the system. The computer stopped mid-way during booting and dropped me to the shell. On typing fsck -p, it told me an unexpected error occurred in the partition which I had made for RAID. Is there some condition that we cannot boot from a disk containing one of the RAID partitions?
After upgrading my Ubuntu install, my RAID array is gone. The drives appear in blkid as "Linux raid member" and both have the same UUID. If I try to mount the drive via fstab I get a message that the drive is not ready or present. If I try to mount each of the two drives, one mounts successfully and the other reports serious errors. Issuing cat /proc/mdstat shows md_d0 as inactive. How can I re-establish my RAID array? I have the data backed up, so if I have to wipe the disks and start over, that's an option.
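One pattern that often comes up for the inactive md_d0 symptom is to stop the half-assembled device and reassemble by hand; a rough sketch, with the member partitions as placeholders:

sudo mdadm --stop /dev/md_d0                           # clear the inactive, partially assembled device
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1     # the two "Linux raid member" partitions reported by blkid
cat /proc/mdstat                                       # should now show md0 active with both members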
I currently have a nice HTPC setup that has been upgraded from distribution to distribution since 8.xx all the way up to 9.10 now. I just moved to a new place and it feels like the right time to do a fresh install of 10.04 into the HTPC. The problem is that I have a RAID 5 array in the system that has all my pictures, videos, music, etc. This OS is installed in a separate drive that is not part of the RAID array (I have 4 drives in the system, 3 in the array, 1 for the OS). what is the general process I should follow to do:
1. a fresh install of 10.04
2. do #1 while at the same time not losing my array (don't think I would anyway).
3. What to do after the install to get the array back up, running, and mounted (a rough sketch follows below).
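For step 3, the sequence I've seen recommended (leaving the three array members untouched during the install) is roughly:

sudo apt-get install mdadm                   # not installed by default from the desktop CD
sudo mdadm --assemble --scan                 # picks the RAID 5 up from the superblocks on the three data disks
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0  /srv/media  ext4  defaults  0  2' | sudo tee -a /etc/fstab   # mount point and fs type are examples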
I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install GRUB, as I know that is still currently broken, and the install goes flawlessly. However, on first boot using my live GRUB CD, unless I tell my computer to point to the CD it will hang (even though it is set to boot from CD first, so I'm not sure why it does). When I tell it to boot to Linux, it will not boot, saying the kernel is missing files (too many to list, sadly, and none of which I understand), then offers me a terminal to input "help" into for a list of Linux commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.
I need help repairing the MBR on my RAID array. I have three disks, each with three partitions: root (sda1 sdb1 sdc1) 59GB, swap (sda2 sdb2 sdc2) 1.12GB, grub/boot (sda3 sdb3 sdc3) 298MB. I have been able to get this running and it has been working fine for several months. A few days ago, I installed 10.04 to a USB stick but did not disable the hard drives at that point, and so the MBR was overwritten. If I leave the USB stick in, it boots fine from that stick. However, now I can't get the boot from the RAID array to work correctly. I can do the following: load 10.04 from the Live CD, install mdadm, recreate the root partition using
I can mount and view the files on md0 with no problems. It's not corrupted in any way. When I installed, I followed the directions to make each of the grub drives bootable. However I don't know for sure whether grub was installed on each partition separately or if it was installed on the assembled partition only. I have tried using
sudo grub-install /dev/sda3
and got warnings, something to the effect of:
Cannot find a device for /boot/grub
no path or device specified
Auto-detection of a filesystem module failed
specify the module with option '--module' explicitly
I have also been able to get to the grub rescue prompt but my keyboard (wireless USB) is not recognized and so I can't type anything in at that point.
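One way people work around that grub-install warning is to assemble and mount the arrays, bind-mount the system directories, and reinstall GRUB to each disk's MBR from inside a chroot. A rough sketch, assuming the root array is /dev/md0 and the grub/boot partitions form /dev/md1 (names illustrative):

sudo mount /dev/md0 /mnt                      # root array
sudo mount /dev/md1 /mnt/boot                 # boot array, if /boot is separate
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-install /dev/sda                         # put the boot code back on each disk so any of them can boot
grub-install /dev/sdb
grub-install /dev/sdc
update-grub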
We have a server with RAID 0 across 4 hard disks, each 250 GB. The Linux kernel should see one hard disk named /dev/sda with 1TB capacity, right? We also have 2 partitions on sda: sda1 and sda2. We want to add another partition but we don't have enough space.
Now the problem: if we add another hard disk and run Code: fdisk -l will the /dev/sda space be incremented automatically so we can add new partitions, or must we do something else?
I wanted to merge my 1TB disks into a RAID 5 array. Four of them in RAID 5 is above the 2-terabyte limit of msdos partition tables which GRUB 2 can boot from, so I decided to set the system up from scratch, building it on GPT partitions. But it seems GRUB 2 won't boot from the GPT partition, because it drops to grub rescue and I can't really do anything from there.
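For what it's worth, GRUB 2 can boot from GPT on a BIOS machine, but it needs a small unformatted BIOS boot partition on the boot disk to embed its core image into; without one, grub-install generally has nowhere safe to put it. A sketch with parted (device name and sizes illustrative):

sudo parted /dev/sda mklabel gpt
sudo parted /dev/sda mkpart bios_boot 1MiB 3MiB    # a couple of MB, no filesystem needed
sudo parted /dev/sda set 1 bios_grub on            # flag it so grub-install can embed core.img there
sudo grub-install /dev/sda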
I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.
I have an HTPC that was giving me insane amount of problems after 3 months of good use.
1x 250gig Samsung Drive (OS Drive)
3x 1TB Western Digital Caviar Green (Raid-5)
In 9.10, the raid was working fine. I decided to fresh install Ubuntu 10.04 and I can't seem to start the raid array. In Disk Utility, the array shows up but when I try to start it I get the error "Not enough components to start the array"
I've tried to assemble the array using mdadm and the following:
I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with x4 2TB disks). All is working well. My mdadm.conf file looks like this
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
If I were to lose the boot disk and needed to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
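That matches my understanding: everything mdadm needs (level, member count, array UUID) lives in the superblocks on the four data disks, so a fresh installation only has to be told to look. A quick sketch (member device name illustrative):

sudo mdadm --examine /dev/sdb1     # dumps the superblock of one member: level, UUID, member count
sudo mdadm --assemble --scan       # on the new install, this alone should bring /dev/md0 back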
I'm currently using Windows Vista 32-bit on a RAID 1 array; I'm using the RAID provided by my motherboard so it's fakeRAID. Anyway, I'd like to do some C development under Linux but I'm not exactly sure how to go about installing it on a software RAID 1 array without messing up Windows. I'm not sure which Linux distro I'm going to install, so I'm hoping that information isn't important. Would I just resize my Windows partition and put Linux on the newly created partition? Do I have to worry about where Linux will put its bootloader or will it manage that on its own? I didn't mean software RAID, I meant fakeRAID.
I've recently had trouble reinstalling my Ubuntu system as I was getting various unusual errors as described in my old thread here. I thought it was probably something to do with my RAID-0 array which was pre-installed on my laptop from purchase being corrupted or something like that (if it's possible). I decided to simplify things for myself (not understanding RAID arrays much) so I just removed the RAID array and installed Windows and Ubuntu on the now separate hard disks. It worked fine.
I noticed quite a significant performance drop, however, with even Ubuntu boots taking longer than 30 seconds despite my laptop being both high-spec and only a few months old. Windows, as you can imagine, was dreadfully slow. I wasn't entirely convinced that this was entirely due to the loss of the RAID array - as even low-spec laptops with presumably no RAID arrays are supposed to boot Ubuntu in under 30 seconds apparently - but I read that RAID-0 arra
I have a RAID 5 array, md0, with three full-disk (non-partitioned) members, sdb, sdc, and sdd. My computer will hang during the AHCI BIOS if AHCI is enabled instead of IDE, if these drives are plugged in. I believe it may be because I'm using the whole disk, and the AHCI BIOS expects an MBR to be on the drive (I don't know why it would care).
Is there a way to convert the array to use members sdb1, sdc1 and sdd1, partitioned MBR with 0xFD RAID partitions?
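There is no in-place conversion that I know of, but because it is RAID 5 you can migrate one member at a time and let the array rebuild in between. One caveat: a partition is slightly smaller than the raw disk, so --add can refuse if the array already uses every last sector, in which case the array (and filesystem) would need shrinking first. A rough sketch for the first disk, only sensible with a current backup:

sudo mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd   # drop one member; the array keeps running, degraded
sudo fdisk /dev/sdd                                      # create one full-size partition and set its type to fd
sudo mdadm /dev/md0 --add /dev/sdd1                      # re-add it as a partition and wait for the resync
cat /proc/mdstat                                         # once recovery finishes, repeat for sdc and sdb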