General :: Migrate An Installed Ubuntu System From A Software Raid To A Hardware Raid?
Jun 29, 2011. Can you migrate an installed Ubuntu system from a software RAID to a hardware RAID on the same machine? How would you go about doing so?
I can find many articles on Google about migration from RAID 1 to RAID 5, but what about RAID 5 to RAID 1? I have a root (/) partition in RAID 5 and want to migrate it to RAID 1. This is on production systems.
I had done a new lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory"- so I created directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
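For reference, a rough sketch of how the root UUID switch is usually done; the array device (/dev/md0) and boot disk (/dev/sda) here are assumptions rather than details from the post, and the chroot is given the kernel filesystems so update-initramfs does not need hand-made /proc entries:
Code:
# Find the UUID of the filesystem on the RAID array and use it in the copied /etc/fstab
sudo blkid /dev/md0                          # assumed array device; substitute the real one
# Rebuild the initramfs and GRUB config from a proper chroot
sudo mount --bind /dev  /media/raid_array/dev
sudo mount --bind /proc /media/raid_array/proc
sudo mount --bind /sys  /media/raid_array/sys
sudo chroot /media/raid_array
update-initramfs -u                          # now sees the real /proc and /sys
update-grub                                  # regenerates the boot entries with the new root UUID
grub-install /dev/sda                        # hypothetical boot disk; point it at the drive the BIOS boots
exit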
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
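Not advice from the thread, just context: the --create line above is missing the array device argument, and re-creating is usually the very last resort. A hedged sketch of the less destructive route, assuming the array is /dev/md1 as the error message suggests:
Code:
# Try a forced assemble first; it only rewrites event counters, not the data
mdadm --stop /dev/md1                        # harmless if the array is not currently running
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
cat /proc/mdstat
# Only if that fails: a re-create with --assume-clean must name the array device first,
# and must use exactly the original device order, chunk size and metadata version
# (check each member with mdadm --examine before touching anything):
# mdadm --create /dev/md1 --assume-clean --metadata=0.90 --level=5 --raid-devices=4 \
#       /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2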
I have two 1TB hard drives in a RAID 1 (mirroring) array. I would like to add a third 1TB drive and create a RAID 5 with the 3 drives for a 2TB system. I have ubuntu installed on a separate drive. Is it possible to convert my RAID 1 system to a RAID 5 without losing the data? Is there a better solution?
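For context, a reasonably recent mdadm can reshape a two-disk RAID 1 into RAID 5 in place, so the data can survive; a minimal sketch, assuming the existing array is /dev/md0, the new disk's partition is /dev/sdd1, and a current backup exists (a reshape is never risk-free):
Code:
mdadm --grow /dev/md0 --level=5              # 2-disk RAID 5, same usable capacity for now
mdadm --add /dev/md0 /dev/sdd1               # add the third disk as a spare
mdadm --grow /dev/md0 --raid-devices=3       # reshape onto 3 devices (~2 TB usable)
cat /proc/mdstat                             # wait for the reshape to finish
resize2fs /dev/md0                           # then grow the filesystem, assuming ext3/ext4 on the array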
Here is my system: I have a Dell PowerEdge 1950 with a PERC 6 and a 300 GB RAID setup. It has two 300 GB disks in a RAID 1 mirror. I have a few applications and data that have reached around 280 GB. As you know, the PowerEdge 1950 can hold only two disks.
They are not mission critical, so I want to remove the RAID and use the machine as a non-RAID system. That way the applications and data can grow up to 600 GB. I do not want to lose the data and setup, and I am not very clear on RAID systems and their conversion.
How would I migrate my whole server to larger hard drives (i.e. I'd like to replace my four 1TB drives with four 2TB drives, for a new total of 4TB instead of 2TB)? I'll post the output of everything relevant that I can think of in code tags below.
I'd like to end up with much larger /home and /public partitions. When I first set up raid and then LVM it seemed like it wouldn't be too hard once this day arrived, but I've had little luck finding help online for upgrades and resizing versus simply rebuilding from a failure. Specifically, I figure I have to mirror the data over to the new drives one at a time, but I can't figure out how to build the raid partitions on the new disks in order to have more space (yet mirror with the old drive that has a smaller partition)... don't the raid partitions have to be the same size to mirror?
Ubuntu Server (karmic) 2.6.31-22-server #65-Ubuntu SMP; fully updated
Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 sdd5[1] sdc5[0]
968952320 blocks [2/2] [UU]
[Code].....
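To the size question: md RAID 1 members only have to be at least as large as the array, so a bigger partition on the new disk simply leaves unused space until the array is grown later. A rough sketch of the usual one-disk-at-a-time swap for the md5 mirror shown above; the member names come from the mdstat output, while the volume group and logical volume names (vg0/home) are hypothetical:
Code:
# For each member in turn: remove the old 1 TB disk, fit the new 2 TB disk with a larger
# partition of the same type, and let the mirror resync before touching the next one
mdadm /dev/md5 --fail /dev/sdc5 --remove /dev/sdc5
# (swap the drive, create the larger /dev/sdc5 partition on the new disk)
mdadm /dev/md5 --add /dev/sdc5
cat /proc/mdstat                             # wait for [UU] before repeating with sdd5
# Once both members sit on 2 TB partitions, grow the array, then the PV and the LV
mdadm --grow /dev/md5 --size=max
pvresize /dev/md5
lvextend -l +100%FREE /dev/vg0/home          # hypothetical VG/LV names
resize2fs /dev/vg0/home                      # assuming an ext3/ext4 filesystem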
I have a box that doesn't have a RAID controller or a software RAID running currently. I would like to make it a RAID 1. Since IDE RAID controllers are hardly around any more, I have another HD that is the exact same model as the drive currently in the box running CentOS. Can I somehow add the second drive and get the box to mirror from here on out? The box gets really hot and I want to be ready for an HD failure.
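One commonly used approach (a sketch under assumed device names, /dev/hda for the current disk and /dev/hdb for the identical new one) is to build a degraded mirror on the new disk, copy the running system onto it, and only then pull the old disk into the array:
Code:
# Partition the new disk to match the old one, then create a RAID 1 with one slot missing
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 missing
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
# Copy the live system over, keeping the virtual filesystems as empty directories
rsync -avH --exclude='/proc/*' --exclude='/sys/*' --exclude='/mnt/*' / /mnt/
# After pointing /etc/fstab, grub and the initrd at /dev/md0 and booting from it,
# add the original disk so it resyncs into the mirror
mdadm --add /dev/md0 /dev/hda1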
I set up an HTPC about a month ago and just expanded my storage by adding two 750GB drives in addition to my OS drive. I am using Ubuntu 9.10 as my OS and need help setting up a RAID 1 on the two 750GB drives.
gparted shows the two drives as /dev/sdb and /dev/sdc
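A minimal sketch of doing this with mdadm from a terminal, assuming one partition per disk (/dev/sdb1 and /dev/sdc1) and that both drives are empty:
Code:
# Create one partition on each disk (fdisk/parted), then build the mirror
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
# Record the array so it assembles at boot, then mount it
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo mkdir -p /mnt/storage
sudo mount /dev/md0 /mnt/storage             # add a matching /etc/fstab entry to make it permanent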
I have (had) Debian Testing running on a 250GB IDE hard drive, partitioned normally.
I also have 4x 1TB drives in a raid 5 using mdadm, and 2x 500GB drives in a raid 1 also with mdadm.
I put the two arrays in lvm using:
I then used "lvcreate" to make storage/backup 300GB, and the rest went to storage/media (approx. 2TB usable). I put an xfs filesystem on both and mounted them.
All was working fine until the system drive shorted out and died on me this morning. As far as I can tell, all my other drives and everything else is fine. I do a daily rsnapshot of the filesystem, which of course is residing on storage/backup (stupid, I know). So I have full backups of everything, but I'll have to put a new hard drive in and reinstall Debian before I can restore everything.
I've reinstalled before and simply reassembled the mdadm arrays and remounted them with no problems, but this is the first time I've used LVM, so I'm not sure what I have to do to restore everything. Is it as simple as reinstalling the system and then doing a:
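Broadly, yes; the arrays and the volume group live on the data disks, not on the failed system drive, so after the reinstall they usually just need to be reassembled and activated. A hedged sketch (the volume group name 'storage' comes from the post, the mount points are assumptions):
Code:
# Reassemble the existing md arrays from their on-disk superblocks
mdadm --assemble --scan                          # add explicit device lists if scanning finds nothing
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # so they come up on every boot
# Rescan and activate the existing LVM volume group
pvscan
vgscan
vgchange -ay storage
# Mount the logical volumes (xfs, per the post) and recreate the fstab entries
mount /dev/storage/backup /mnt/backup
mount /dev/storage/media /mnt/media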
I have a RAID 5 on 10 disks (750 GB each), and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT, I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle, and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc and grub-common, removed /boot/grub, and installed GRUB again. Same problem.
I have tried to erase the MBR (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub is still generating the error about RAID version 0.91, and normal boot is back at a blinking cursor. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 to prevent exactly the kind of thing that happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch on [URL], but I can't compile it; I get various errors about dpkg. So my problem is that I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
I have had Ubuntu installed on my M1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC as well, but I have run into a problem. I am not a pro at Ubuntu, but this is a problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO motherboard with two Seagate drives in RAID 0 (the RAID controller is the SB850 on that board). I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 block of unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Package Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx, and received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps for someone to tell me that the RAID controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but fail to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a motherboard RAID controller.
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
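As a rough, assumed estimate rather than a vendor figure: most hardware controllers mirror the whole disk regardless of how much is used, so the time is roughly capacity divided by the sustained rebuild rate, typically a few hours for 1 TB and longer if the controller throttles the rebuild while the array is in use.
Code:
# Back-of-the-envelope: 1 TB at an assumed ~100 MB/s sustained rebuild rate
echo "scale=1; 1000000 / 100 / 3600" | bc    # ~2.7 hours of pure copy time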
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu ETERNUS DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is being recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID0 drive on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID0 on one physical HDD) and the configuration gets erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box and running it all under CentOS, and if so, what were the results?
I would like to set up a CentOS file server with LVM and RAID 1. Having 6 x 500 GB drives, 4 x 1 GB RAM and a quad core CPU, I am considering configuring 3 HDDs as LVM and then mirroring them with RAID 1 onto the remaining 3 HDDs.
How can I create RAID 1+0 using two drives (one with data on it and the second one new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
How do I configure RAID 1 (hardware RAID 1 or software RAID 1) after installation of the operating system: Red Hat Enterprise Linux 5.3 Server?
If I have Windows installed on RAID 0, then install VirtualBox and install all my Linux OSes into VirtualBox, will they be a RAID 0 install without needing to install RAID drivers?
I am going to be using CentOS 5.4 for a home storage server. It will be RAID6 on 6 x 1TB drives. I plan on using an external enclosure which is connected via two SFF-8088 cables (4 drives apiece). I am looking to find a non-RAID HBA which would support this external enclosure and allow me to use standard Linux software RAID.
If this is not an option, I'd consider using a hardware-based RAID card, but they are very expensive. The Adaptec 5085 is one option but is almost $800. If that is what I need for this thing to be solid then that is fine, I will spend the money, but I am thinking that software RAID may be the way to go.
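If the software route wins, the mdadm side is straightforward; a minimal sketch, assuming the six drives show up behind the HBA as /dev/sdb through /dev/sdg with one partition each:
Code:
# 6-drive RAID 6: two-disk redundancy, roughly 4 TB usable with 1 TB drives
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf     # CentOS keeps the config at /etc/mdadm.conf
cat /proc/mdstat                             # the initial sync will take a while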
I've got a couple of commercial NAS boxes and I'm wondering if they (ReadyNAS Duo, D-Link DNS-323) or any other NAS are suitable for having their RAIDed disks moved over to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the aforementioned NAS drives be migrated to Ubuntu (e.g. using the mdadm Linux command)?
Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in a migration: URL... The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.
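For what it's worth, the usual first step when such disks are plugged into a Linux box is simply to see whether mdadm recognises their metadata; a generic sketch with assumed device names:
Code:
# Look for md superblocks on the NAS disks (assumed here to appear as sdb/sdc)
mdadm --examine /dev/sdb* /dev/sdc*
# If superblocks are found, try assembling read-only first so nothing is modified
mdadm --assemble --scan --readonly
cat /proc/mdstat
# Some NAS firmwares layer LVM or a vendor-specific format on top of md, so the
# assembled array may still need vgscan/vgchange -ay or a particular filesystem driver.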
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD HDDs, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into 'Try Ubuntu' mode, opening a terminal and issuing a "sudo dmraid -ay" command, and then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I had set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, GRUB 2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top ... following the instructions found at the bottom of this blog: [URL]... To recap: my problem is that after GRUB 2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
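For readers who have not seen that approach: a local-top script of that kind generally looks like the sketch below. This is an assumption based on the standard initramfs-tools script layout, not the contents of the linked blog; it activates the dmraid sets before the root device is needed. Saved as /etc/initramfs-tools/scripts/local-top/dmraid and made executable, it gets picked up by "sudo update-initramfs -u" (the dmraid package's own hook should already copy the binary into the initramfs).
Code:
#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/dmraid
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
# Activate BIOS/fakeraid sets before the root filesystem is mounted
/sbin/dmraid -ay
exit 0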
I am going to set up a new Ubuntu 10.04 system using RAID 1 soon. Installation will be via the alternate CD. Older distributions required manually installing GRUB to the second drive, to be able to boot if the first drive fails. I found different statements about how this is handled since 9.10, e.g.
Quote:
Install GRUB boot-loader on the second drive (this step is not needed if you use Ubuntu 9.10)
or
Quote:
installing GRUB to second hard drive depending on your distribution
> grub-install /dev/md0
or
> grub-install /dev/sda
> grub-install /dev/sdb
Is GRUB 2 automatically installed to all RAID drives when using the 10.04 alternate CD, as if something like "grub-install /dev/md0" were executed during installation?
I have an edge server. RAID 5 has been installed, but I am not able to check the RAID health through monitoring. Using the command line I am able to, but the monitoring tool Opsview shows the error: NRPE: Unable to read output. Machine arch: 64-bit, CentOS 5.2.
Package installed: MegaCli-2.00.15-1.i386.rpm
I am looking to convert a RAID 1 server I have to RAID 10. It is using software RAID; currently I have 3 drives in RAID 1. Is it possible to boot into CentOS rescue, stop the RAID 1 array, and then create the RAID 10 with 4 drives, 3 of which still have the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data? Or is there a better way to do it?
View 2 Replies View RelatedI've tried to install Fedora 11, both 32 and 64 on my main machine.It could not install as it stops on the first install window. I've already filed a bug but really haven't seen any feed back yet.The bug has something to do with Anaconda and the Raid array but I really can't tell.
I have an Intel board (see signature). I am running the Intel RAID software under W7 currently, and it works fine. But I'm wondering, when I attempt to install F11, is my current RAID setup causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
When I installed this system (Xubuntu 9.04 x64) several months ago, I had two identical SATA hard drives, but I didn't do a RAID1 mirror then because I didn't want to wipe out my old OS (FreeBSD) on the second drive in case I needed to retrieve something from it. So I installed Xubuntu on the first drive (sda) and for all that time, the second drive (sdb) has been running but unused, and fdisk showed that the FreeBSD partition was still there.
A couple weeks ago, my separate backup server failed, so as a short-term backup I did a 'dd if=/dev/sda of=/dev/sdb' to make an exact copy of my Xubuntu install on drive A upon drive B (the former FreeBSD drive), then mounted /dev/sdb1 on /mnt to make sure it worked successfully. So far so good, but I still didn't set up any RAID stuff.
Several days later, I needed to reboot because of a security upgrade, and when I logged out, the GUI froze up. Thinking not too much of it, I restarted the system, and it came up fine. But the next day, I discovered that I was missing several days worth of email and recent files. In fact, everything had been reverted back to a state from several days earlier --- I think from the day I copied the first drive to the second one. Files I created in the interim were gone, and files I deleted in the interim were back. It was as if the first drive was 'restored' from the second one without my knowledge.
Doing some testing now, I find that if I create a new file on /dev/sda1 and then mount /dev/sdb1, the file also exists on sdb1. It's as if they're acting as a RAID1 mirror, without my telling it to do so. Could it just decide to do RAID1 because it sees there are two identically partitioned drives? That seems dangerous. And if they were really in a RAID mirror, why would it let me mount them separately? It's very strange.
I don't mind if it's suddenly decided to do RAID, but I want to make sure it's not going to 'restore' a more current filesystem from an older one again, if that's indeed what happened.
I'm working on a server and noticed that the RAID5 setup is showing 4 raid devices but only 3 total devices. It's on a fully updated CentOS 5 system that only has three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Does anyone have any pointers on how I can reduce the raid devices from 4 to 3? I have tried
mdadm /dev/md2 -r failed
mdadm /dev/md2 -r detached
but neither works, and since there is no block device listed I'm not quite sure how to get things back in sync so it's only seeing the three drives.
/dev/md2:
Version : 0.90
Creation Time : Tue May 25 11:07:04 2010
Raid Level : raid5
[code]....
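Not from the thread, but for context: getting rid of the phantom slot means reducing the device count, which mdadm treats as a shrinking reshape, so the filesystem and array size have to be shrunk first and the operation carries a real risk of data loss on a degraded array; it also needs a newer kernel and mdadm than stock CentOS 5 ships, so it would typically be done from a modern rescue environment. A heavily hedged sketch of the general sequence, with a placeholder size:
Code:
# WARNING: going from 4 to 3 raid devices cuts capacity from 3 to 2 data disks;
# back up first and make sure the filesystem already fits inside the smaller size.
NEW_SIZE_KIB=...                             # target array size in KiB; compute for the real disks
resize2fs /dev/md2 ${NEW_SIZE_KIB}K          # shrink the filesystem first (assuming ext3 on md2)
mdadm --grow /dev/md2 --array-size=$NEW_SIZE_KIB
mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-reshape.bak
cat /proc/mdstat                             # the reshape can run for many hours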
I am trying to create a RAID data drive for my system, but I am having trouble setting it up since I am a total Linux noob.
The system has 3 physical HDD-s:
1. 320 GB (has functional Ubuntu 9.10 installation) attached to a PCI SATA card
2. 2TB on motherboard
3. 2TB on PCI SATA card
I want to create a software RAID1 of disks 2 and 3. So far I have used the Palimpsest Disk Utility:
- Created a GUID Partition table on both disks (2, 3)
- Used File -> New -> Software Array, made sure both my drives were included
- Once Palimpsest listed the RAID Drive as a Software RAID Array, I told it to create Ext3 filesystem on it
Well... at least that's what I thought I did. At this point I have been able to mount the RAID drive and put files on it. However, when I look at its information in Palimpsest, I am told that the drive is not partitioned. Both RAID components /dev/sda1 and /dev/sdc1 are reported to be in sync, but the RAID drive's own state is 'Running, Resyncing @ 45%' (and slowly growing).
My questions are: Is this a normal setup, or did I do something incorrectly? Why is the drive reported to have no partition? And how come I can use it if it does not have a partition? I have found the command-line based configurations a tad too confusing to follow, so I have tried to stick to graphical tools - is this a hopeless cause in Ubuntu, or is it possible to achieve what I want without the command line? I will list some info on my disks below - perhaps this offers more insight to those of you more familiar with Linux.
Code:
mindgamer@mind-server:~$ sudo lshw -C disk
[sudo] password for mindgamer:
*-disk:0
description: ATA Disk
product: WDC WD3200BEVT-0
vendor: Western Digital
[Code]...
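Two read-only commands show the same state Palimpsest reports and make it easier to tell whether anything is actually wrong; the array name /dev/md0 is an assumption. The initial resync of a newly created mirror is expected behaviour and the array is usable while it runs, and putting the filesystem directly on the md device with no partition table is also a valid layout, which is why it mounts despite being reported as "not partitioned".
Code:
# Progress of the initial mirror synchronisation (finishes on its own)
cat /proc/mdstat
# Detailed state of the array and both members
sudo mdadm --detail /dev/md0                 # assumed array name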
As the title says, my RAID system is very, very slow (this is a RAID5 of 6 x 1 TB Samsung HD103UJ drives):
Code:
leo@server:~$ sudo hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 1094 MB in 2.00 seconds = 546.79 MB/sec
Timing buffered disk reads: 8 MB in 3.16 seconds = 2.53 MB/sec
This is impossibly slow, and I've tried every configuration (RAID5, RAID0, pass-through), and I get nearly exactly the same speed (with restarts, so I'm sure I'm really talking to the volumes I've defined).
Compare it with the system drive:
Code:
leo@server:~$ sudo hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 1756 MB in 2.00 seconds = 878.12 MB/sec
Timing buffered disk reads: 226 MB in 3.01 seconds = 75.14 MB/sec
Some hardware/software info:
Code:
Raid card
Controller Name ARC-1230
Firmware Version V1.48 2009-12-31
Code:
Motherboard
Manufacturer: ASUSTeK Computer INC.
Product Name: P5LD2-VM DH
Code:
leo@server:~$ uname -a
Linux server 2.6.31-20-server #58-Ubuntu SMP Fri Mar 12 05:40:05 UTC 2010 x86_64 GNU/Linux
Code:
IDE Channels .....
I'm a bit lost now. I could change the motherboard, or some bios settings.
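One generic check that helps separate a controller or firmware problem from a filesystem or driver problem (a sketch, not from the thread; the mount point is assumed) is a plain sequential read and write with dd:
Code:
# Sequential read straight from the RAID volume (read-only, safe)
sudo dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
# Sequential write to a scratch file on the mounted array (assumed mounted at /mnt/raid)
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=1024 oflag=direct conv=fdatasync
rm /mnt/raid/ddtest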
I tried to install the new Ubuntu on an Intel RAID 1 system, but it said:
Quote:
The ext4 file system creation in partition #1 of Serial ATA RAID isw_chibcceegh_Volume0 (mirror) failed.
My config is:
P5Q Pro
2x500 GB Seagate HDD
Intel Raid 1
Booting Ubuntu from a USB drive (I wonder, does this cause the problem?)
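A common first check with Intel fakeraid installs (generic, not specific to this report) is to confirm from the live session that the mirror set is activated and that the installer is pointed at the mapped volume rather than the raw disks:
Code:
# From the live USB session, activate and list the BIOS RAID (isw = Intel Software RAID) sets
sudo dmraid -ay
sudo dmraid -s
ls /dev/mapper/                              # isw_chibcceegh_Volume0 and its partitions should appear
# Partitioning and installation should then target /dev/mapper/isw_chibcceegh_Volume0,
# not /dev/sda or /dev/sdb directly.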