Ubuntu :: Adding A 2nd HDD And Creating Software Raid-1?
Apr 7, 2010
I have installed Ubuntu Studio 9.10 on my Dell Dimension 1100 desktop and I'm trying to set up RAID-1 because I'm constantly worried that my hard disk is going to fail. I have two drives: one 40GB and one 80GB. I created a 40GB partition on the 80GB drive and I want to mirror that partition with the 40GB drive. Is this possible? And am I right in thinking that I can RAID everything, including /boot?
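Mixing a partition with a same-sized partition on another disk is a normal mdadm setup. A minimal sketch, assuming the 40GB partition on the 80GB drive is /dev/sda1 and a full-size partition on the 40GB drive is /dev/sdb1 (device names are assumptions):

Code:
# build the mirror from the two 40GB partitions
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0
# with old-style 0.90/1.0 metadata each RAID-1 member is readable as a plain
# filesystem, which is why /boot on RAID-1 generally works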
Setup: internet -- VirtualBoxHost -- WiFi switch -- clients. VirtualBoxHost runs several virtual machines and has 2 interfaces. The WAN interface is not used by VirtualBoxHost itself, but is mapped directly to the Endian Firewall virtual machine.
What I want: access to the VMs for the WiFi clients AND access from the WiFi clients to VirtualBoxHost.
What I have tried:
- create a bridge
- add the LAN interface to the bridge
- create tap0
- add tap0 to the bridge
The virtual machine uses tap0. WiFi clients are connected via the LAN interface, through the bridge, to the VMs. But my host machine is not reachable from a WLAN client, and I also cannot reach a virtual machine from my host machine. A WLAN client can connect to a virtual machine without problems. I think the bridge now connects the local LAN and the VMs; how do I also put my host machine into that LAN, as just another LAN client?
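The usual cause is that the host's IP address still sits on the physical NIC instead of on the bridge; enslaved interfaces should carry no address. A minimal sketch, assuming the LAN NIC is eth1 and the bridge is br0 (names are assumptions):

Code:
brctl addbr br0
brctl addif br0 eth1
brctl addif br0 tap0
ip addr flush dev eth1     # strip the address from the physical NIC
dhclient br0               # give the host an address on the bridge itself
                           # (or assign a static one here instead)

With the address on br0, the host becomes a peer on the same segment as the WLAN clients and the VMs.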
I have recently installed an Asus M4A77TD Pro system board which supports RAID.
I have 2 x 320GB SATA drives I would like to set up RAID-1 on. So far I have configured the BIOS RAID-1 for the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA, Ubuntu installs and works as normal, but I just have 2 x drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to set it up within Ubuntu.
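As a first check from the live CD, it may be worth seeing whether dmraid actually recognises the BIOS (fakeraid) set before the installer touches it; a short sketch:

Code:
sudo dmraid -r     # list disks dmraid sees as BIOS-RAID members
sudo dmraid -ay    # activate the set under /dev/mapper so the installer can use it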
I have the newest Ubuntu installed on my machine. It currently has a 160GB SATA drive. I just bought two shiny new 2TB drives that I want to RAID together as 4TB. How do I go about adding these two hard drives even though the install is complete? I want the 4TB assigned to my /home directory.
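Arrays can be created at any time after install. A minimal sketch, assuming the new disks show up as /dev/sdb and /dev/sdc (striping two 2TB disks into 4TB means RAID-0, i.e. no redundancy):

Code:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt
sudo rsync -a /home/ /mnt/     # copy the existing home across
# then point /home at /dev/md0 in /etc/fstab and reboot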
I have been making the soundtrack with Audacity and using various other programs to create a "slide show video", then adding the soundtrack. I would like to know which programs you would recommend for doing this easily and quickly. At the moment I would really like something where I could add photos, set timing and transition effects, and add a soundtrack to each individual image.
I have implemented LVM to expand the /home partition. I would like to add 2 more disks to the system and use RAID 5 across those two disks plus the disk used for /home. Is this possible? If so, do I use type fd for the two new disks and type 8e for the existing LVM /home disk? Or do I use type fd for all of the RAID disks?
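One common layering, sketched below with assumed device and volume-group names (sdb1/sdc1/sdd1, home_vg): mark every mdadm member partition as type fd and make the md device itself the LVM physical volume; type 8e is only for partitions used directly as PVs. Note that folding the live /home disk into the array means its data has to be moved off first.

Code:
# all three member partitions are type fd; LVM sits on top of the md device
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo pvcreate /dev/md0
sudo vgextend home_vg /dev/md0   # "home_vg" is a hypothetical VG name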
I have a system that has the following partitions:
Now sdc is a new drive I added. I would like to pool that new drive with the RAID-ed drives to give myself more space on my existing system (and structure). Is this possible since my RAID already has data on it?
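mdadm can grow an existing array in place without losing data, provided the level supports reshaping. A minimal sketch, assuming the existing array is /dev/md0 at a reshape-capable level such as RAID-5 and the new drive carries a single partition /dev/sdc1:

Code:
sudo mdadm --add /dev/md0 /dev/sdc1
sudo mdadm --grow /dev/md0 --raid-devices=3   # the reshape preserves existing data
sudo resize2fs /dev/md0                       # grow the filesystem afterwards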
I have a server that was running a hardware ISW RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive and I was able to set it to rebuild the RAID. Continuing the reboot, everything came up just fine except the RAID 1 on the system disk. I have tried many times to get the system to rebuild the RAID using dmraid, but to no avail: it would not start a rebuild. In order to get the system back up and make sure the disk was duplicated, I was able to 'dd' the working disk onto the newly installed disk.
At present the system does not show up with a RAID setup on the system disk (this comprises the entire 1TB disk, with 2 partitions: sda1 as / and sda2 as swap).
Problem: I have decided to forego the Intel RAID and just use mdadm. I have a test system set up to duplicate the server setup (not the software, but the disk partitions).
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
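For the mdadm migration on the test box, the usual route is to build degraded mirrors on the second disk, copy the system across, and only then absorb the original disk. A minimal sketch, assuming the second disk is /dev/sdb and the layout mirrors sda1 (/) and sda2 (swap):

Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb    # clone the partition table
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /devdb2
# copy / onto /dev/md0, fix fstab and grub to use the md devices, boot from
# the degraded mirrors, then complete them:
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2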
Everything is fine until here, but after a reboot the device won't mount on /orac; it says the special device is not available. I found that the md02 device is not in an active state. I tried deleting it and recreating it, but no use: it still won't persist across a reboot.
I have created a software RAID 5 configuration on the second hard drive and it's working fine, and I have edited the fstab file for auto-mounting on reboot. But when I reboot the computer the RAID doesn't come up; I have to re-create the array by typing the "mdadm --create" command again and mount it manually. Is there any way I can do this once without retyping the commands after every reboot? I am also using Red Hat 5.
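Both of the last two cases look like a missing mdadm.conf: without it, the array is not assembled at boot, so fstab has nothing to mount. A minimal sketch for RHEL/CentOS 5 (Debian/Ubuntu use /etc/mdadm/mdadm.conf instead):

Code:
mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it assembles at boot
# never re-run "mdadm --create" on an existing array; reassemble instead:
mdadm --assemble --scan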
We have a server with RAID 0 across 4 hard disks of 250 GB each. The Linux kernel should see one hard disk named /dev/sda with 1TB capacity, right? We also have 2 partitions on sda: sda1 and sda2. We want to add another partition, but we don't have enough space.
Now the problem: if we add another hard disk and run Code: fdisk -l will the /dev/sda space be incremented automatically so we can add new partitions, or must we do something?
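In general, no: unless the new disk is joined to the controller's existing array, it shows up as a separate device rather than as extra space on /dev/sda. A short sketch:

Code:
fdisk -l          # the new disk typically appears as /dev/sdb
fdisk /dev/sdb    # partition it separately, or add it to the RAID set in the
                  # controller's setup (expanding a RAID 0 usually means
                  # rebuilding the array from scratch)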
We have some servers that run in very harsh environments (research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc), however we would like to be able to break out a new server and re-image it (including RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID-ed devices (from a live CD):
Code: dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using Code: mdadm --assemble to re-create and re-build the array.
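The reverse direction might look like this sketch (same assumed paths as the image command above; /dev/md0 and /dev/sdb1 are hypothetical names for the array and the second mirror member):

Code:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
mdadm --assemble --scan          # the md superblocks on the restored disk come along
mdadm --add /dev/md0 /dev/sdb1   # second member is then resynced from the image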
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID-5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong and I succeeded in marking two of the drives as faulty, then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
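Before any --create, it is generally safer to read the event counters from each member and try a forced assemble; --create rewrites the metadata and is a last resort. A sketch:

Code:
mdadm --examine /dev/sd[abcd]2 | grep -E 'Event|State'
# if the event counts are close, a forced assemble usually brings the array back:
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2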
I have a RAID 5 of 10 disks, 750GB each, and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer! So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc and grub-common, removed /boot/grub and installed GRUB again. Same problem.
I have tried to erase the MBR (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub still generates the error about RAID version 0.91, and a normal boot is back to a blinking cursor. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 precisely to guard against what happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile; various errors about dpkg. So my problem is: I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
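It may be worth confirming that every member really reports 0.90 again before fighting GRUB further; a sketch, with the array and member names as assumptions:

Code:
mdadm --detail /dev/md0 | grep -i version     # the running array's metadata version
mdadm --examine /dev/sdb1 | grep -i version   # and per member (repeat for each disk)
# if anything still shows 0.91, let the reshape finish; once all report 0.90:
grub-install /dev/sda && update-grub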
I have installed Ubuntu on my M1530 since 8.04 and currently dual-boot Win7 and 10.10. I would like to dual-boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO motherboard with two Seagate drives in RAID 0. (The RAID controller is the SB850 on that board.) I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 unused space: partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Package Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if it made a difference.
I am looking for any suggestions on a different method, or perhaps for someone to tell me that the RAID controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a RAID controller from a motherboard.
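For an SB850 fakeraid it is normally dmraid, not kpartx alone, that exposes the array in a live session; kpartx then maps its partitions. A sketch (the mapper name is whatever dmraid reports, e.g. pdc_* on many AMD boards):

Code:
sudo apt-get install dmraid kpartx
sudo dmraid -ay                          # activate the set under /dev/mapper
sudo kpartx -av /dev/mapper/<raid-set>   # map its partitions so gparted sees one disk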
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was a success and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
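If MegaCLI sees the enclosure but no /dev node appears, a SCSI bus rescan sometimes makes the kernel map the logical drive; a sketch:

Code:
# repeat for each hostN that megaraid_sas registered
echo "- - -" > /sys/class/scsi_host/host0/scan
dmesg | grep -i megaraid      # check the driver's view
cat /proc/scsi/scsi           # and whether a new disk was attached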
I have two 1TB hard drives in a RAID 1 (mirroring) array. I would like to add a third 1TB drive and create a RAID 5 with the 3 drives for a 2TB system. I have Ubuntu installed on a separate drive. Is it possible to convert my RAID 1 system to a RAID 5 without losing the data? Is there a better solution?
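With a reasonably recent mdadm and kernel, a 2-disk RAID 1 can be reshaped to RAID 5 in place and then grown onto the third disk. Back up first; the device names below are assumptions:

Code:
sudo mdadm --grow /dev/md0 --level=5          # 2-disk RAID 1 -> 2-disk RAID 5
sudo mdadm --add /dev/md0 /dev/sdd1           # add the third drive
sudo mdadm --grow /dev/md0 --raid-devices=3   # reshape across all three
sudo resize2fs /dev/md0                       # grow the filesystem to ~2TB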
How can I create RAID 1+0 using two drives (one with data and the second one new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
If I have Windows installed on RAID-0, then install VirtualBox and install all my Linux OSes into VirtualBox, will they be a RAID-0 install without needing to install RAID drivers?
I am going to be using CentOS 5.4 for a home storage server. It will be RAID 6 on 6 x 1TB drives. I plan on using an external enclosure connected via two SFF-8088 cables (4 drives apiece). I am trying to find a non-RAID HBA which would support this external enclosure and allow me to use standard Linux software RAID.
If this is not an option, I'd consider using a hardware-based RAID card, but they are very expensive. The Adaptec 5085 is one option but is almost $800. If that is what I need for this thing to be solid then that is fine, I will spend the money, but I am thinking that software RAID may be the way to go.
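If a plain HBA works out, the software side is straightforward; a minimal sketch, assuming the six drives appear as /dev/sdb through /dev/sdg with one partition each:

Code:
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan >> /etc/mdadm.conf   # persist across reboots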
It's been a real battle, but I am getting close. I won't go into all the details of the fight I've had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1-terabyte WD HDDs, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components, and told it to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, GRUB2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after GRUB2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
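For reference, the workaround described in hooks like that amounts to a small initramfs script so dmraid runs before the root filesystem is mounted; a sketch of the commonly posted version (not verified against that particular blog):

Code:
#!/bin/sh
# saved as /etc/initramfs-tools/scripts/local-top/dmraid,
# then: chmod +x <script> && update-initramfs -u
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in prereqs) prereqs; exit 0 ;; esac
/sbin/dmraid -ay   # activate fakeraid sets before the root fs is mounted
exit 0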
I am looking to convert a RAID 1 server I have to RAID 10. It is using software RAID; currently I have 3 drives in RAID 1. Is it possible to boot into CentOS rescue, stop the RAID 1 array, then create the RAID 10 with 4 drives, 3 of which still have the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data? Or is there a better way to do it?
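Creating a RAID 10 over members that still carry RAID 1 metadata will not preserve the data; --create writes new metadata and a new layout. One cautious route, sketched with assumed device names, is to build the RAID 10 degraded from the new disk plus one disk pulled out of the mirror, copy the data over, then migrate the rest:

Code:
# in the default near-2 layout one member of each mirror pair may be "missing"
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 missing /dev/sdd1 missing
# copy everything from the still-running RAID 1 (/dev/md0) onto /dev/md1,
# then stop md0 and complete the RAID 10 with its disks:
mdadm --add /dev/md1 /dev/sda1 /dev/sdb1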
I have a box that doesn't have a RAID controller or software RAID running currently. I would like to make it a RAID 1. Since it seems there are hardly any IDE RAID controllers around, I have another HD that is the exact same model as the drive currently in the box running CentOS. Can I somehow add the second drive and get the box to mirror from here on out? The box gets really hot and I want to be ready for an HD failure.
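Yes, via the same degraded-mirror trick as above: start a one-sided RAID 1 on the new disk, copy the system onto it, boot from it, then absorb the original disk. A compressed sketch, assuming the current IDE disk is /dev/hda and the new one /dev/hdb:

Code:
sfdisk -d /dev/hda | sfdisk /dev/hdb   # clone the partition layout
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1
mkfs.ext3 /dev/md0                     # copy / across; point fstab and grub at /dev/md0
# after booting from the degraded mirror:
mdadm --add /dev/md0 /dev/hda1         # the box mirrors from here on out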
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running Intel RAID software under W7 currently. It works fine. But I'm wondering: when I attempt to install F11, is my current RAID setup causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the array?
I'm working on a server and noticed that the RAID 5 setup is showing 4 Raid Devices but only 3 Total Devices. It's on a fully updated CentOS 5 system that has only three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Anyone have any pointers on how I can reduce the Raid Devices from 4 to 3? I have tried
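For reference, shrinking the member count of a RAID 5 is a reshape and also shrinks capacity, so the filesystem and the array size have to come down first; a sketch with placeholder sizes (sizes left unfilled on purpose):

Code:
resize2fs /dev/md2 <new-fs-size>                 # shrink the fs below the new capacity
mdadm --grow /dev/md2 --array-size=<new-size>    # cap the array's usable size
mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-reshape.bak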
Consider the following setup: an Ubuntu system installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of X physical HDDs for storage (RAID 6 or RAID 10). The RAID setup is done during system install. If I suffer a total crash of the SSD and lose my system, will I be able to, using a new system disk, "reconnect" to the RAID array even though the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized "out of the box" when reinstalling Ubuntu?
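The mdadm metadata lives on the member disks themselves, so a fresh install can normally just scan for and reassemble the array; a sketch:

Code:
sudo mdadm --assemble --scan                          # finds arrays from on-disk metadata
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist on the new system
sudo update-initramfs -u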
I have created a RAID on one of my servers. The RAID health is OK but it shows a warning, so what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)