CentOS 5 Hardware :: Fake Raid Versus Hardware Raid After A Hardware Failure?
Oct 12, 2010
Recently, while using a Highpoint 2310 (RAID 5), I lost the motherboard and CPU. I had to reinstall CentOS and found it needed to initialize the array to function: a total loss of data. Question: if I use a true hardware card (3ware 9650SE) and experience a serious hardware loss, or lose the C drive, can the card be installed with its drives on a new motherboard and function without data loss, even if the OS must be reinstalled?
View 4 Replies
Nov 6, 2009
1. One of my hdds (sda) failed in software RAID 1. I RMA'd the drive to Western Digital and got another one. Do I have to format it before putting it in my CentOS server? If yes, how do I format it?
2. Since the sda drive failed, do I mark sda as failed in the RAID, remove the sda hdd, and pop the new hdd into its place? Or do I switch sdb to sda and put the new hdd in sdb's place?
3. After that I add it to the RAID, and once the RAID rebuilds I have to reinstall grub? Can grub be done via ssh only, or do I need to be at the datacenter or get a KVM?
4. Last question: I've got a Supermicro hot-swap hdd case, so do I need to shut down the server while I replace the drives? I just want to be sure I do this correctly.
The following is the guide I will be using; please look at it and let me know whether it describes the correct procedure: [URL]. One more thing: when the hdd (sda) failed I put it back into the RAID, but the drive has bad sectors, which is why I'm replacing it.
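For reference, the usual mdadm replacement sequence looks roughly like this (a sketch only: it assumes the array is /dev/md0 with members sda1/sdb1, so adjust to the real layout):
Code:
mdadm --manage /dev/md0 --fail /dev/sda1     # mark the dying member failed
mdadm --manage /dev/md0 --remove /dev/sda1   # pull it out of the array
# after swapping the physical disk, clone the partition table:
sfdisk -d /dev/sdb | sfdisk /dev/sda
mdadm --manage /dev/md0 --add /dev/sda1      # rebuild starts automatically
grub-install /dev/sda                        # boot code on the new disk; fine over ssh
The new drive needs no pre-formatting (the partition-table copy plus --add is enough), the surviving drive stays where it is, and with a hot-swap backplane the physical swap itself normally needs no shutdown; watch the rebuild in /proc/mdstat.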
View 8 Replies
View Related
Jun 9, 2011
Following scenario: My server in some data center on a different continent with two disks and software raid 1.
One day I see that a disk has failed (for example in /proc/mdstat). Of course I should replace the failed disk ASAP. Now that I think about it, I am not sure how. What should my email to the data center support tech mention to make sure he doesn't replace the wrong disk?
With hardware RAID it is very easy, because the controller usually has some kind of red LED indicator. But what about software raid?
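One common workaround is to give the technician the serial number of the dead disk (or of all the good ones) rather than a device name, since device names mean nothing at the front of the rack. A sketch:
Code:
hdparm -I /dev/sda | grep Serial     # serial of a surviving disk
smartctl -i /dev/sdb | grep Serial   # often still answers on a sick disk
# if the dead disk no longer responds at all, list the survivors and ask
# for the one whose serial is NOT on the list; reading from a good disk
# (dd if=/dev/sda of=/dev/null) also makes its activity LED blink steadily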
View 8 Replies
View Related
Jul 16, 2010
I've been attempting to install Ubuntu on my Intel fake RAID on a new PC. The installer sees the RAID, but errors out:
Quote: The ext4 file system creation in partition #1 of Serial ATA RAID isw_ehbgbddej_Volume0 (mirror) failed.
This seems to be the EXACT issue described in this bug report. I read all the way down the report, and at the bottom it says:
Quote: Launchpad Janitor on 2010-06-25. Changed in parted (Ubuntu Lucid): status: Fix Committed → Fix Released
So a few weeks ago a fix was "committed" and "released". I had been working with an earlier 10.04 ISO, so I downloaded 10.04 from the web site today. Still the same problem. Maybe I'm just reading the bug report wrong? If it says the fix is committed and released, I would naively assume that it is fixed. Do I need to apply a patch to get the fix or something? That doesn't seem likely.
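One thing worth checking from the live session is whether the ISO actually carries the fixed parted; re-spun images lag the archive, so "Fix Released" may only mean the updated package reached the repositories. A sketch (compare the version against the one named in the bug report):
Code:
apt-cache policy parted   # version present in the live environment
ls -l /dev/mapper         # confirm the isw_* volume the installer is using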
View 3 Replies
View Related
Feb 1, 2011
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
I am considering re-creating the array in place with:
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
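Before anything as destructive as --create --assume-clean, it seems worth recording what each superblock still says, since a re-create only succeeds if the device order and chunk size match the original exactly, and a forced assemble is far less risky. A sketch:
Code:
for d in /dev/sd[abcd]2 ; do
    mdadm --examine $d | grep -E 'UUID|Events|Chunk|this'
done
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
--force may still refuse given the spare re-add, but it leaves the metadata recoverable, which --create does not.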
View 4 Replies
View Related
Jun 19, 2010
I installed openSUSE 11.2 x64 on my PC, and it is not working well. After installation, the system auto-logs into the KDE GUI, but when I reboot (or shut down and power on) my computer, a message on screen says something like "can't find bootable partition". My PC's hardware: motherboard DFI 790FXB-M3H5 (south bridge SB750); hard disks: WD 500GB x2 in RAID 0 (via the SB750 chip function). Oh, I'm sorry, it's fake RAID, not hardware RAID.
View 9 Replies
View Related
Aug 3, 2010
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. MegaCLI, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID 0 drive on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when the server restarts, the controller's BIOS fails with an error (not surprising: one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
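When the controller BIOS and MegaCLI both see the logical drive but no /dev node appears, forcing a SCSI bus rescan sometimes surfaces it without a reboot; a sketch:
Code:
lsmod | grep megaraid_sas   # confirm the driver is actually loaded
dmesg | grep -i megaraid    # look for the controller probe messages
for h in /sys/class/scsi_host/host* ; do echo "- - -" > $h/scan ; done
cat /proc/partitions        # the logical drive should appear as sdX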
View 3 Replies
View Related
Aug 2, 2010
I've got a SATA HDD which I use for storage, connected to a (now quite old) RAID PCI card (HighPoint RocketRAID 1520). I've added another (blank) HDD, same brand and size, to the PCI card. Ubuntu (10.04) can see both hard drives and access data. I'd like to mirror (RAID 1?) these HDDs. Looking around this forum I've noticed quite a few people mentioning FakeRAID; it turns out that's not quite the same as software RAID. Given how cheap the 1520 was, I suspect it's FakeRAID rather than hardware. Perhaps someone can confirm? Given that Ubuntu can see both HDDs, would this mean it'll have the correct drivers to work with a hardware/fake RAID? A common recommendation is to use mdadm to create a software RAID. How does this work for partitions accessed from multiple operating systems, including Windows XP?
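One point narrows the choice: an mdadm array carries Linux-specific metadata, so Windows XP cannot read it at all; sharing a mirror between XP and Ubuntu is the one case where the card's fake RAID (via dmraid on Linux plus HighPoint's Windows driver) is the sensible option. For a Linux-only mirror, the mdadm route is a few lines (a sketch; the partition names are assumptions and the disks should be backed up first):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf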
View 3 Replies
View Related
Jan 6, 2011
First, I am relatively new to Linux (but not to *nix). I have 4 disks assembled into the following Intel AHCI BIOS fake RAID arrays:
2x320GB RAID1 - used for operating systems md126
2x1TB RAID1 - used for data md125
I used the 320GB array to install my operating system; the second array I didn't even select during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available. It was possible to make it visible in Linux with mdadm --assemble --scan; after that I created one maximum-size partition and one maximum-size ext4 filesystem in it, mounted it, and used it. After a restart I got a few I/O errors during boot regarding md125, plus an inability to mount the filesystem on it, and was dropped into a repair shell. I commented out the filesystem in fstab and it booted. To my surprise, the array was marked as "auto-read-only":
[Code]...
and the partition in it was not available as device special file in /dev:
[Code]...
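For what it's worth, md starts arrays it assembles on its own in "auto-read-only" until the first write, and that state can be cleared by hand; the missing partition node usually just needs the partition table re-read. A sketch (names follow the post):
Code:
mdadm --readwrite /dev/md125   # drop the auto-read-only flag
partprobe /dev/md125           # re-read the partition table -> /dev/md125p1
kpartx -a /dev/md125           # alternative if partprobe is unavailable
Writing the arrays into /etc/mdadm.conf (mdadm --detail --scan) also tends to stop the boot-time assembly ordering problems that produce the I/O errors.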
View 1 Replies
View Related
Oct 20, 2010
I've been, for some years, a happy user of HighPoint HPT374-based RAID cards, using RAID 5 with decent performance (consistently around 90 MB/s read and 60 MB/s write) on an old Athlon mobo with a 2.6.8 kernel. Now the mobo is dead, so I've got an Asrock A330GC (dual-core with 4 GB RAM), installed Debian with kernel 2.6.26, and moved the controller and disks over. Write performance has dropped to a painful 9 MB/s, measured with dd of a huge file (interrupted with kill -USR1 $(pidof dd)).
Read performance is still around 90 MB/s: using hdparm -t repeatedly, the figures are consistently around 90 MB/s. I suspect some libata issue; under the old kernel the RAID was seen as hdb, now it is sdb, so some driver change for the PATA disks may be responsible. I've used the v2.19 driver from HighPoint; the latest driver is broken (it causes a kernel oops during formatting of the RAID), and I've informed HighPoint.
Here some info :
hdparm -i /dev/sdb :
HDIO_GET_IDENTITY failed: invalid argument .
lspci :
00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02)
00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02)
00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 01) .....
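Before digging into libata, it may be worth ruling out a disabled write cache and the measurement method itself, since dd through the page cache can mislead; a sketch (the proprietary HighPoint driver may reject the hdparm ioctls, as it already does for -i):
Code:
hdparm -W /dev/sdb    # is the write cache on? (0 = off)
hdparm -W1 /dev/sdb   # try enabling it
dd if=/dev/zero of=/path/on/raid/test bs=1M count=1024 oflag=direct
oflag=direct bypasses the page cache, so the figure reflects what the array can actually sustain.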
View 2 Replies
View Related
Dec 10, 2009
I am going to be using CentOS 5.4 for a home storage server. It will be RAID 6 on 6 x 1TB drives. I plan on using an external enclosure connected via two SFF-8088 cables (4 drives apiece). I am trying to find a non-RAID HBA which would support this enclosure and allow standard Linux software RAID.
If this is not an option, I'd consider a hardware RAID card, but they are very expensive. The Adaptec 5085 is one option but costs almost $800. If that is what I need for this thing to be solid, fine, I will spend the money, but I am thinking software RAID may be the way to go.
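With a plain non-RAID HBA the kernel's md layer does all the RAID 6 work, so any HBA with two SFF-8088 ports that exposes the disks individually will do; the array itself is a few commands. A minimal sketch (device names are assumptions):
Code:
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[bcdefg]1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf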
View 3 Replies
View Related
May 22, 2011
What is the difference between RAID and LVM?
View 2 Replies
View Related
Feb 23, 2011
I am attempting to install Maverick on an older PC. It has (2) mirrored 153 GB SATA drives. I set aside 33 GB for a Linux partition by shrinking the partition in Vista.
Choosing to "Install alongside Windows" is not an available option: I can only choose to use the whole disk or use the empty area. I choose the empty area, and the Ubiquity installer gets hung up when attempting to format the empty space as ext4. When I scroll up in the output I notice some error messages referencing NTFS. When I use GParted it shows a possible problem.
Quote:
Warning:
The device /dev/sda1 doesn't exist
ntfsresize v2.0.0 (libntfs 10:0:0)
ERROR(2): Failed to check '/dev/sda1' mount state: No such file or directory
Probably /etc/mtab is missing. It's too risky to continue. You might try another Linux distro.
Unable to read the contents of this file system! Because of this some operations may be unavailable.
The following list of software packages is required for ntfs file system support: ntfsprogs.
I ran chkdsk /r on the Vista partition in case there was a problem there, but the issue still persists.
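Two things seem worth trying from the live session. The last GParted message names the missing package, and since the two drives are BIOS-mirrored, the installer may need to be pointed at the dmraid mapper device rather than /dev/sda. A sketch (the mapper name is an assumption):
Code:
sudo apt-get install ntfsprogs   # gives GParted/ntfsresize NTFS support
sudo dmraid -ay                  # activate the BIOS mirror, if present
ls /dev/mapper                   # partitions appear as e.g. isw_..._Volume0p1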
View 2 Replies
View Related
Apr 28, 2009
I've just set up a new CentOS 5.3 server with a 3-disc software RAID 5 array. I've set up other software RAID 5 arrays on this same hardware whilst testing and had no trouble; I only just installed 3 new drives and performed a new install from scratch.
Hardware is: Athlon 64 X2 4800+, 2GB RAM, Albatron KI-690G mainboard with a Marvell SATA controller (I think) - 4 ports.
SATA port 0 is system drive (OCZ vertex 32GB SSD)
SATA port 1 is Western Digital "green" 1TB 8MB cache SATA 2
SATA port 2 is Western Digital "green" 1TB 8MB cache SATA 2
SATA port 3 is Western Digital "green" 1TB 8MB cache SATA 2
Ports 1-3 in software raid MD0 (raid 5)
All of this was configured in the GUI setup, with LVM on top of software RAID 5 mounted on /var. Partition size is ~1.9TB
The trouble is that I get all sorts of "hardware failure" messages at bootup, and the MD driver reports it was only able to bring up 2 out of 3 drives in the RAID set... yet the RAID set formatted fine during the setup?
Here is the relevant dmesg output...
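A quick way to see which member dropped, and to put it back once cabling or controller flakiness has been ruled out (the array and member names are assumptions):
Code:
cat /proc/mdstat                # shows [UU_]-style status per array
mdadm --detail /dev/md0         # names the member that is missing or faulty
mdadm /dev/md0 --add /dev/sdd1  # adding it back starts a resync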
View 2 Replies
View Related
Dec 7, 2010
I'm working on a server and noticed that the RAID 5 setup is showing 4 Raid Devices but only 3 Total Devices. It's a fully updated CentOS 5 system that has only three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Anyone have any pointers on how I can reduce the Raid Devices count from 4 to 3? I have tried
mdadm /dev/md2 -r failed
mdadm /dev/md2 -r detached
but neither works, and since there is no block device listed I'm not quite sure how to get things back in sync so that it sees only the three drives.
/dev/md2:
Version : 0.90
Creation Time : Tue May 25 11:07:04 2010
Raid Level : raid5
[code]....
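If the array is genuinely running on three members and the fourth slot just reads "removed", one approach on a sufficiently new mdadm (3.x; stock CentOS 5 may be too old) is to reshape the member count down. Shrinking RAID 5 by a disk reduces capacity, so the filesystem and array size must be shrunk first, and a backup is essential; a sketch with placeholder sizes:
Code:
# shrink the filesystem, then cap the array size (value is a placeholder)
mdadm --grow /dev/md2 --array-size=<smaller-size>
# reshape from 4 slots to 3; keep the backup file on a separate disk
mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-reshape.bak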
View 8 Replies
View Related
Jul 1, 2011
I'm currently trying to set up Slackware 13.37 on a server, using software RAID 1. I'm using the README_RAID.TXT document at the root of the Slackware disc as a reference. Anyway, here's what I have so far.
/dev/md1 -> /boot partition
/dev/md2 -> swap partition
/dev/md3 -> / partition
Code:
[root@raymonde:~] # fdisk -l /dev/sd{a,b}
Disk /dev/sda: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders, total 80293248 sectors
Units = sectors of 1 * 512 = 512 bytes
[code]....
I created an initrd image using mkinitrd -F, added a corresponding stanza to /etc/lilo.conf, and ran 'lilo' after that. Now I can boot the vanilla huge kernel all right, but I can't seem to boot the generic kernel. Whenever I try, the boot process stops short with the following error message:
Code:
mount: mounting /dev/md3 on /mnt failed: Device or resource busy
ERROR: no /sbin/init found on rootdev
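For what it's worth, the generic kernel only boots from RAID if the initrd assembles the arrays before root is mounted, and Slackware's mkinitrd has a switch for exactly that. A sketch (the kernel version string is an assumption; match it to the installed generic kernel):
Code:
mkinitrd -c -k 2.6.37.6 -f ext4 -r /dev/md3 -m ext4 -R
# -R pulls RAID support into the initrd; then re-run lilo
lilo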
View 5 Replies
View Related
Jun 5, 2011
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04 on it. I then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't: when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error; mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
[code]....
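Before anything destructive, comparing the event counters usually shows why only 2 of the 4 members are being used: members that fell out earlier report older counts, and close counts can normally be forced back in. A sketch (member names are assumptions):
Code:
mdadm --examine /dev/sd[bcde]1 | grep -E 'Events|State'
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcde]1
If that starts the array degraded, copy the data off before re-adding the stale members.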
View 2 Replies
View Related
May 22, 2009
I *had* a server with 6 SATA2 drives and CentOS 5.3 on it (upgraded over time from 5.1). I had set up software RAID 1 on /boot for sda1 and sdb1, with sdc1, sdd1, sde1, and sdf1 as hot spares. I created LVM (over RAID 5) for /, /var, and /home. I had a drive fail last year (sda). After a fashion, I was able to get it working again with sda removed. Since I had two hot spares on my RAID5/LVM setup, I never replaced sda. Of course, on reboot, what was sdb became sda, sdc became sdb, etc. Recently, the new sdc died; the hot spare took over, and I was humming along. A week later, before I had a chance to replace the spares, another drive died (sdb). Now I have 3 good drives and my array is degraded, but it kept running until I just shut it down to try the repair.
I now only have one replacement drive (it will take a week or two to get the others). I went to linux rescue from the CentOS 5.2 DVD and changed sda1 to a Linux (as opposed to Linux RAID) partition. I need to change my fstab to look for /dev/sda1 as /boot, but I can't even mount sda1 as /boot. What do I need to do next? If I try to reboot without the disk, I get: insmod: error inserting '/lib/raid456.ko': -1 File exists. Also, my md1 and md2 fail because there are not enough discs (it says 2/4 failed). I *believe* this is because sda, sdb, sdc, sdd, and sde WERE the drives in the raid before, and I removed sdb and sdc, but now I do not have sde (because I only have 4 drives) and sdd is the new drive. Do I need to label these drives and try again? Suggestions? (I suspect I should have done this BEFORE the failure.) Do I need to rebuild the RAIDs somehow? What about LVM?
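Since the kernel renames the devices every time one is removed, the superblocks, not the /dev names, are the reliable map; a sketch:
Code:
mdadm --examine --scan -v   # ask every partition which array it belongs to
blkid                       # shows raid members vs. plain filesystems
Assembling by UUID (mdadm --assemble /dev/md1 --uuid=...) then sidesteps the renaming entirely, and the LVM layer should reappear on its own (vgscan; vgchange -ay) once the md devices are up.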
View 6 Replies
View Related
Feb 2, 2010
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the raid. I turned the server off, took the failed drive out, and restarted. Of course the raid didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and on with the drive there, just to see if I could get the raid up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to get the raid working with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup, and for how to remove the bad drive from the array and replace it with a fresh drive without data loss.
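For the record, a 3-disk RAID 5 will run (degraded) on two members, so the "trick" is simply to assemble without the dead disk; a sketch, assuming sdc and sdd are the good members as in the post:
Code:
mdadm --assemble --run --force /dev/md0 /dev/sdc /dev/sdd
# later, with the warranty replacement installed:
mdadm /dev/md0 --add /dev/sdb
--run starts the array despite the missing member, and adding the new disk kicks off the rebuild with no data loss as long as no second member fails in the meantime.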
View 3 Replies
View Related
Jun 18, 2010
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 -
the raid drive for the above three disks. The sda1 disk has failed and the array is running on 2 of 3 disks. The other drives in the machine are:
/dev/sdc (OS disk)
/dev/sde (new 2tb disk - unused)
/dev/sdf (new 2tb disk - unused)
My plan was to rebuild the array using the two new disks as RAID 1. Would the best way to do this be to create a new RAID 1 device on /dev/md1 and then copy all data over from /dev/md0? Also, this may sound stupid, but since all 3 drives in md0 are identical I'm not sure which physical disk is bad. I tried disconnecting each disk one by one and rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure how to remove it properly.
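A sketch of both halves of this, with device and mount names as assumptions: the failed member is removed from md0, the new RAID 1 is built on the 2 TB disks, and the serial number printed on the drive label identifies the physical disk.
Code:
mdadm /dev/md0 --remove /dev/sda1    # drop the already-failed member
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mkfs.ext3 /dev/md1
mount /dev/md1 /mnt/new
rsync -aHx /mnt/old/ /mnt/new/       # copy off the degraded md0
hdparm -I /dev/sda | grep Serial     # match against the sticker on the drive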
View 3 Replies
View Related
Sep 6, 2010
Based on the reading I've done over the past 48 hours, I think I'm in serious trouble here with my RAID 5 array. I got another 1 TB drive and added it to my other 3 to increase my space to 3 TB... no problem.
While the array was resyncing, at about 40%, I had a power failure. So I'm pretty sure it failed while it was growing the array, not the partition. The next time I booted, mdadm didn't even detect the array. I fiddled around trying to get mdadm to recognize my array, but no luck.
I finally got desperate enough to just create the array again. I knew the settings of my array and had seen some people have success with this method. When creating it, it asked me if I was sure, because the disks appeared to belong to an array already, but I said yes. The problem is that when I created it, it created a clean array, and this is what I'm left with:
Code:
/dev/md0:
Version : 00.90
Creation Time : Sun Sep 5 20:01:08 2010
Raid Level : raid5
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
[Code]....
I tried looking for backup superblock locations using e2fsck and every other tool I could find, but nothing worked. I tried testdisk, which says it found my partition on /dev/md0, so I let it create the partition. Now I have a /dev/md0p1, which won't mount either. What's interesting is that gparted reports /dev/md0p1 as the old partition size (1.82 TB)... the data has to still be there, right?
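One more thing that sometimes works: mke2fs with -n does a dry run and prints where the backup superblocks would sit for that partition's geometry, and those offsets can then be fed to e2fsck. This only helps if the re-created array matches the original chunk size and device order exactly; otherwise re-creating again with the original parameters comes first. A sketch:
Code:
mke2fs -n /dev/md0p1        # -n changes nothing; it only prints locations
e2fsck -b 32768 /dev/md0p1  # 32768 is typical; use the offsets printed above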
View 3 Replies
View Related
Sep 13, 2010
On my PC, which was running WinXP, I thought of installing Ubuntu. (I have installed Linux a few times in past years and use it on another couple of PCs.) But something went wrong. This machine has 2 x 200GB Maxtor drives in a RAID 0 configuration, supported by the motherboard's Nvidia chipset and working well in Windows. When I ran the live Ubuntu 10.04 CD, gparted was not able to access the drives in the RAID configuration until I installed the mdadm and kpartx packages; then the existing data became visible.
So after that initial moment I thought all was OK and proceeded to install Lucid on the machine, dual-booting with Windows. I partitioned manually so that on my 400GB RAID drive there is an 80GB NTFS partition with WinXP, a 90GB extended partition for Linux ext4 and swap, and then a last 200GB NTFS partition for data. All went well, but now on restarting the computer nothing happens: nothing loads, GRUB is not showing, and it looks like I cannot launch Linux or Windows. All the data from WinXP and the Ubuntu installation seems to be on the disks, but the PC is just not booting. I suppose the problem is with the RAID configuration not being handled properly during the installation, but is there anything I can do now, apart from reinstalling Windows XP or installing Ubuntu in a non-RAID configuration?
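It may be recoverable without reinstalling: from the live CD the fake RAID mapping can be activated and GRUB reinstalled into the array's MBR from a chroot. A sketch with the mapper name as a placeholder (grub-install onto dmraid devices is known to be touchy on 10.04, hence --recheck):
Code:
sudo dmraid -ay                                # activate the nvidia RAID 0 mapping
ls /dev/mapper                                 # e.g. nvidia_xxxxxxxx plus partitions
sudo mount /dev/mapper/nvidia_xxxxxxxx5 /mnt   # the Linux root (number assumed)
for d in dev proc sys ; do sudo mount --bind /$d /mnt/$d ; done
sudo chroot /mnt grub-install --recheck /dev/mapper/nvidia_xxxxxxxx
sudo chroot /mnt update-grub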
View 9 Replies
View Related
Dec 19, 2010
I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor in a hot box full of drives, so it was really only a matter of time before it quit on me. Normally this wouldn't be such a big deal; however, I had just recently constructed an md RAID 5 array of 3 x 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know the array is intact: all the required data is sitting on those disks. Since only the OS-level disk failed, I should be able to get a new disk in there, reinstall Ubuntu, and then bring back that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev devices like when I initially built the array?
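No: --create is only for brand-new arrays. An existing array is described entirely by the superblocks on its member disks, so after the reinstall it is re-assembled, not re-created. A sketch (Ubuntu paths):
Code:
mdadm --assemble --scan                         # finds the array from superblocks
mdadm --detail --scan >> /etc/mdadm/mdadm.conf  # make it persistent
update-initramfs -u                             # so it assembles at boot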
View 2 Replies
View Related
Apr 4, 2010
I have installed a Fedora Core 12 Linux system onto a RAID 1 file system. I now need a way of getting a notification if a disk fails. Is there an SNMP MIB that covers Intel RAID? I have done the searching, but the answer still eludes me.
View 1 Replies
View Related
May 1, 2011
I just set up sendmail on my server to send emails and it works; now I would like to be able to get an email from mdadm if something goes wrong. I imagine most raid users have this feature set up.
Right now, I have 7 raid arrays and mdadm starts at boot time. Until now, I used Mr. Goblin's script (http://connie.slackware.com/~mrgoblin/files/rc.mdadm) (thanks Mr Goblin!) to monitor my arrays.
The script is started at boot time from rc.local. I created a small script in /usr/bin that sends the following command to rc.mdadm, giving me the status of the arrays:
Code:
/etc/rc.d/rc.mdadm status
and it works fine, but this requires me to probe the arrays manually by calling the script from the command line. I would like to automate probing every 10 minutes or so, and get an email if a fault has been detected.
[Code]...
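For what it's worth, mdadm has a built-in monitor mode that does exactly this: poll all arrays on an interval and mail on failure events, so it can replace the manual probing with one daemon launched from rc.local. A sketch (the address is a placeholder):
Code:
mdadm --monitor --scan --daemonise --delay=600 --mail=you@example.com
# one-off test alert to confirm the sendmail path works:
mdadm --monitor --scan --oneshot --test --mail=you@example.com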
View 14 Replies
View Related
Aug 27, 2010
UPDATE: I decided to reinstall and run the partitioner to get rid of the raid. It is not worth dealing with this, since the problem seems to be lower level: /dev/mapper was not listing any devices, and Error 15 at grub points to legacy grub. So I am avoiding the problem by getting rid of raid for now; you can ignore this post. I found a nice grub2 explanation on the wiki, but it didn't help this situation, since it probably isn't a grub problem. It is probably an installer failure to map devices properly, because it only used what was already available and didn't create devices during the install. I don't know; just guessing. I had openSUSE 10.3 64-bit installed with software raid mirrored swap, boot, and root. I used the alternate 64-bit Ubuntu ISO for installation. Since partitioning was already correctly set up and the raid devices /dev/md0,1,2 were recognized by the installer, I chose to format the partitions with ext3 and accept the configuration:
/dev/md0 = swap = /dev/sda1, /dev/sdb1 = 2Gb
/dev/md1 = boot = /dev/sda2, /dev/sdb2 = 256Mb
/dev/md2 = root = /dev/sda3, /dev/sda3 = 20Gb
The installation process failed at the point of installing grub: it had attempted to install the bootloader on /dev/sda2 and /dev/sdb2. I moved on, since it would not let me fiddle with the settings, and I got the machine rebooted with the rescue option on the installation ISO. Now I can see the root partition is populated with files as expected. dpkg lists linux-image-generic, headers, and linux-generic as installed, along with other supporting kernel packages; grub-pc is installed as well. However, the /boot partition (/dev/md1) was empty initially after the reboot. What is the procedure to get grub to install the bootloader on /dev/sda2 and /dev/sdb2, which make up /dev/md1 (/boot)?
Running apt-get update and apt-get upgrade installed a newer kernel, and this populated the /boot partition. Running update-grub results in "/usr/sbin/grub-probe: error: no mapping exists for 'md2'"; grub-install /dev/md2 and grub-install /dev/sda2 give the same error as well. Both commands indicate that "Autodetection of a filesystem module failed. Please specify the module with the option '--modules' explicitly." What are the right modules that need to be loaded for a raid partition in the initrd? Should I be telling grub to use a raid module?
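With GRUB 2 on software RAID, the boot code goes into the MBR of each physical disk rather than onto the md device or a partition, and grub-probe needs the chroot to see /dev, /proc, and /sys. A sketch from the rescue environment (device names follow the post):
Code:
mount /dev/md2 /mnt
mount /dev/md1 /mnt/boot
for d in dev proc sys ; do mount --bind /$d /mnt/$d ; done
chroot /mnt
grub-install /dev/sda   # MBR of the first disk
grub-install /dev/sdb   # and the second, so either disk can boot
update-grub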
View 1 Replies
View Related
Jul 18, 2011
I have a raid5 of 10 disks, 750GB each, and it has worked fine with grub for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the raid, grew it, and then resized it. BUT I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer!
So the resize process was cancelled in the middle, and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it, finished the resize process, and the raid seems to be working fine, so I tried to boot normally again. Same problem. From the rescue CD I updated grub and got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc, and grub-common, removed /boot/grub, and installed grub again. Same problem.
I have also tried to erase the MBR (# dd if=/dev/null of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new raid disk). Same problem. I removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again I tried to reinstall grub on both sda and sdb, with no luck. update-grub still generates the error about raid version 0.91, and I am back to a blinking cursor on a normal boot. While you are resizing a raid, mdadm changes the metadata version from 0.90 to 0.91 to flag the reshape in progress, but since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch at [URL], but I can't compile it due to various dpkg errors. So my problem is that I can't get grub to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
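Two observations, hedged. First, dd reading from /dev/null copies zero bytes, so those MBR wipes were no-ops; zeroing boot code actually needs /dev/zero. Second, since the complaint names 0.91 metadata (the transient version mdadm uses mid-reshape), it is worth confirming what every member superblock reports now. A sketch (member names are assumptions):
Code:
dd if=/dev/zero of=/dev/sdX bs=446 count=1       # really zeroes the boot code
mdadm --examine /dev/sd[b-k]1 | grep -i version  # every member should say 0.90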
View 2 Replies
View Related
Jun 29, 2011
Is it possible to migrate an installed Ubuntu system from a software raid to a hardware raid on the same machine? How would you go about doing so?
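Since md metadata and a controller's on-disk format are incompatible, there is no in-place conversion; the usual route is a full copy plus a boot-loader reinstall. A rough sketch from a live CD (device names and mount points are assumptions):
Code:
mount /dev/md0 /mnt/old    # the existing software RAID
mount /dev/sda1 /mnt/new   # the new hardware RAID volume
rsync -aHx /mnt/old/ /mnt/new/
blkid /dev/sda1            # update /mnt/new/etc/fstab with this UUID
for d in dev proc sys ; do mount --bind /$d /mnt/new/$d ; done
chroot /mnt/new grub-install /dev/sda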
View 1 Replies
View Related
Mar 22, 2011
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
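A rough way to bound it: a hardware mirror copies the whole 1 TB regardless of the 500GB used, so the floor is capacity divided by sustained throughput. At 100 MB/s that is roughly 1,000,000 MB / 100 MB/s = 10,000 s, or just under 3 hours on an idle array; controllers that throttle the rebuild to keep the array responsive (often to 30 MB/s or so) stretch that to 9 hours or more. So anywhere from about 3 to 12 hours for 1 TB is normal, and most cards let you tune the rebuild priority.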
View 1 Replies
View Related