Server :: Mdadm Software Raid1 Failed Disk Detection Too Long

Jul 22, 2011

I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured with mdadm as RAID1 with a hot spare. If I pull one of the active disks, all file I/O stops for about 2.5 minutes, after which it starts again and the RAID array is rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from /var/log/messages is:

12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)

[code]....
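
In case it helps others hitting the same stall: the delay usually comes from the disk and the SCSI/libata layers retrying before md ever sees a failure. A minimal sketch of the usual knobs (device names and values are examples, and not every drive supports SCT ERC):

Code:
# Shorten the SCSI command timeout (seconds) on every array member.
for d in sda sdb sdc; do
    echo 7 > /sys/block/$d/device/timeout
done
# If the drives support SCT Error Recovery Control, cap their internal
# retries at 7 seconds (the value is in tenths of a second).
smartctl -l scterc,70,70 /dev/sda

Even with these set, the libata "frozen" port handling seen in the log above adds its own delay, so the stall may shrink rather than disappear.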

View 1 Replies



Server :: Mdadm Cannot Grow Raid1 Over Lvm?

Mar 31, 2011

I have 2 servers (hostnames xen1 and xen2) with the somewhat perverse configuration below. Each server has 4 SATA disks, 1 TB each.

16 GB DDR3
Debian Squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

Storage configuration: the first 256 MB and 32 GB of two of the four disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10, with LVM2 installed on top of it. The volume group is named xenlvm (the servers are meant to be used as Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):

[Code]...
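
For reference, the layered setup described above can be checked from the shell; a small sketch (xenlvm is the volume group named above):

Code:
cat /proc/mdstat          # state of the raid1 and raid10 md devices
pvs                       # physical volumes backing LVM
vgs xenlvm                # the volume group on top of the raid10
lvs xenlvm                # logical volumes holding /, /var, /home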

View 3 Replies View Related

Ubuntu Servers :: Boot From Raid1 (mdadm) + Lvm

Aug 11, 2010

Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install I set up the device topology below, and it worked.

[Code]....

Then I tested my RAID by hot-pulling the sda cable (ouch). That worked fine: the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, with the first drive missing). Now I am trying to recover the pre-crash state. After reattaching the first disk (now showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk as sda...

At first, booting got stuck at an initrd prompt after complaining that it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints anymore, just a black screen that would let me wait for a boot for weeks... So my system does not boot from the first disk, whether I plug in the second or not; my second disk still boots. My last attempt to get booting working again was: zero sda's first and last gigabyte to kill any stale IDs, duplicate sdb's first cylinder onto sda to make it bootable, reinitialize sdb's partition table using fdisk's 'o' command for a new disk ID, recreate the sda1 partition, and add sda1 to md0.
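
One thing worth trying once the resync has finished is reinstalling the boot loader on both disks, so either one can boot alone. A sketch, assuming Ubuntu 10.04's grub-pc and the current device names:

Code:
# Run from the working system after the md resync completes.
grub-install /dev/sda
grub-install /dev/sdb
update-grub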

View 1 Replies View Related

General :: RAID1 Resync Slow Using Mdadm?

Mar 11, 2010

I am running kernel 2.6.18-128.el5 on a 64-bit quad-core machine with 8 GB RAM. Using mdadm I set up a RAID1 array between two Western Digital 1.5 TB drives. The problem is that the resync is running VERY slowly. Here is the current status.

[root@royalflush shared]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdc5[1] hda5[0]
1304493952 blocks [2/2] [UU]
[=>] resync = 6.2% (81592192/1304493952) finish=4280156.0min speed=4K/sec
unused devices: <none>
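
The md layer throttles resync to leave bandwidth for normal I/O, and the floor defaults to a very low value. A sketch of raising the limits (the KB/s values are examples):

Code:
cat /proc/sys/dev/raid/speed_limit_min   # default is often 1000 KB/s
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max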

View 1 Replies View Related

General :: Recover A RAID1 Array Using Mdadm

May 12, 2010

I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died, and I could do with either rebuilding the array or getting the data off another way.

I have a spare machine I could use for the recovery process. It has a spare drive, but it's only 120 GB; I also have a bigger 320 GB disk, but that's IDE, not SATA. Do I need to purchase another 500 GB SATA drive, or can I use either of my spares? If I do need to buy a new drive, could I use a 1 TB or 1.5 TB one, or will it have to be 500 GB? Next question: what is the best version of Linux to use? I have Knoppix 6.2 and Ubuntu (not sure which version) already. I noticed that mdadm isn't installed by default on Ubuntu.
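
A single RAID1 member can usually be read on its own, so one hedged approach is to attach the surviving 500 GB disk to the spare machine and assemble the array degraded, mounting it read-only to copy the data off. A sketch only - the partition name is an example, and this assumes the NAS used a standard md superblock and a filesystem the live CD can read:

Code:
mdadm --examine /dev/sdb1                 # confirm it really is an md member
mdadm --assemble --run /dev/md0 /dev/sdb1 # --run starts it despite the missing mirror
mount -o ro /dev/md0 /mnt                 # read-only, to avoid touching the data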

View 1 Replies View Related

Software :: Moved To Mdadm Raid1 And Now Having Grub?

Nov 27, 2010

Posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to an mdadm RAID1 array, partially following this guide: [URL]. What I did was boot a live CD and make three partitions on both of my empty disks: one for /, one for swap, and one for /vz (it's an OpenVZ server). I made those partitions into separate RAID1 arrays and then rsync-ed everything from the old disk to the new partitions.

After I had moved everything I did chroot into the new raid array and edited both grub config files and fstab, according to the guide.

[Code]...

I have managed to run the system on the RAID1 disks when using a Super Grub2 Disk from a CD, but that has its own GRUB and can boot any distro, so I can see that the system is working fine, except for GRUB. I have tried installing GRUB both from a live CD (Ubuntu 64-bit) and when booted into the RAID1 array, but it gives the same results as stated above.
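
For what it's worth, the usual way to get GRUB onto the new arrays is to chroot into the RAID root from the live CD and install it on both disks' MBRs. A sketch only - the md and disk names below are examples and need to match your layout:

Code:
mount /dev/md0 /mnt            # the raid1 holding /
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
grub-install /dev/sda
grub-install /dev/sdb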

View 2 Replies View Related

General :: Creating A RAID1 Partition With Mdadm On Ubuntu?

Jan 28, 2010

I'm trying to set up a RAID1 partition on my Ubuntu 9.10 workstation. On this dual-boot system, Ubuntu is running from a separate drive (/dev/sdc - an SSD that is quite small, which is why I need more disk space). Besides that, there are two traditional 500 GB hard drives, which have Windows 7 installed (I want to keep the Windows installation intact), and about half of the space unallocated. This space is where I want to set up a single, large RAID1 partition for Linux.

(This, to my understanding, would be software RAID, whereas the Windows partitions are on hardware RAID - I hope this isn't a problem... Edit: See Peter's comment. I guess this shouldn't be a problem since I see both drives separately on Linux.) On both disks, /dev/sda and /dev/sdb, I created, using fdisk, identical new partitions of type "Linux raid autodetect" to fill up the unallocated space.

Device Boot Start End Blocks Id System
/dev/sda1 1 10 80293+ de Dell Utility
/dev/sda2 * 11 106 768000 7 HPFS/NTFS

[code]....

But so is "Device or resource busy" when trying to create the RAID array. Quite strange.

Update: Could the device mapper have something to do with this? How do /dev/mapper and dmraid relate to all this mdadm stuff anyway? Both provide software RAID, but.. differently? Sorry for my ignorance here. Under /dev/mapper/ there are some device files that, I think, somehow match the 3 Windows RAID partitions (sd{a,b}1 through sd{a,b}3). I don't know why there are four of these arrays though.

$ ls /dev/mapper/
control isw_dgjjcdcegc_ARRAY1 isw_dgjjcdcegc_ARRAY3
isw_dgjjcdcegc_ARRAY isw_dgjjcdcegc_ARRAY2
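
The "Device or resource busy" error is consistent with device-mapper holding the disks through that isw (Intel BIOS RAID) metadata. A purely diagnostic sketch to see what has claimed them:

Code:
dmraid -r        # list disks carrying BIOS/fakeraid (isw) metadata
dmsetup ls       # active device-mapper mappings
dmsetup deps     # which underlying block devices each mapping holds open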

View 2 Replies View Related

Ubuntu Servers :: Interpreting Mdadm RAID1 Status?

Feb 7, 2011

I have a RAID1 array, where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection from the mdadm -E /dev/sdb1 command says that sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive is failed? Am I misreading the command outputs?

Code:
sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

[Code]...
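
One way to untangle this is to compare the array's own view with each member's superblock: mdadm --detail describes the assembled array, while -E/--examine reads the metadata on a single partition. A sketch (device names as in the post):

Code:
sudo mdadm --detail /dev/md0      # the array's list of active/removed slots
sudo mdadm --examine /dev/sda1    # what sda1's superblock says about itself
sudo mdadm --examine /dev/sdb1    # what sdb1's superblock says about itself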

View 3 Replies View Related

Ubuntu Servers :: Mdadm Raid1 Doesn't Finish Syncing?

Nov 14, 2010

One of the hard drives in my server failed the other day, backups saved the day and downtime was only a few hours, but when setting up the new drive I went ahead and migrated to software RAID, in the hopes it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.

sda was the original drive, with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I then added the sda partitions to the array, and the sync went off without a hitch on md1 and md2. md4 progresses well, but after a few hours /proc/mdstat just shows this:

Code:
root@d668:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sda2[0]
9767424 blocks [2/2] [UU]

[Code]...
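
It may help to ask the md layer directly what it thinks the resync on md4 is doing; a sketch using the sysfs entries:

Code:
cat /proc/mdstat
cat /sys/block/md4/md/sync_action      # e.g. resync, recover, or idle
cat /sys/block/md4/md/sync_completed   # sectors done / sectors total
cat /sys/block/md4/md/mismatch_cnt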

View 3 Replies View Related

Server :: Softraid With Mdadm - Cannot Access Disk Anymore

Jun 22, 2011

Seems like I killed my software RAID. First, my initial setup: 2x 1.5 TB SATA disks -> RAID1 with mdadm -> LVM -> multiple LVs as LUKS partitions. This worked for a while, even though I made a bad mistake: I created the RAID out of the whole disks (instead of creating partitions on them). I didn't notice because it worked...

Then one disk failed, but I could still access the other disk, which I moved to another server. mdadm recognized it during boot; after vgscan --mknodes; vgchange -ay I saw the LUKS partitions in /dev/mapper/ and could open them via luksOpen and mount them. This went well several times, and I did not use the disk otherwise, to avoid killing it too.

Just today, when I wanted to move the data over to my new RAID, this approach won't work anymore.

First of all, dmesg reports a wrong size (500G instead of 1.5T)

Code:
[ 1.943127] sd 3:0:0:0: [sdc] 976817134 512-byte logical blocks: (500 GB/465 GiB)
[ 1.943153] sd 3:0:0:0: [sdc] Write Protect is off
[ 1.943155] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00

[Code].....
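
A 1.5 TB disk suddenly reporting 500 GB is sometimes a Host Protected Area or a controller limit rather than real damage; a diagnostic sketch (the device name follows the dmesg output above):

Code:
hdparm -N /dev/sdc      # compare "max sectors" current vs. native (HPA check)
smartctl -i /dev/sdc    # model, firmware and the capacity the drive itself reports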

What can I try to get my data back?

View 7 Replies View Related

OpenSUSE Install :: Mdadm - Change Spare To Active In RAID1 Array?

Aug 7, 2011

I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.

openSUSE 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x 500 GB IDE disks. /dev/md0 is a 250 MB partition across /dev/sda1 and /dev/sdd1 for /boot; another 465 GB partition across /dev/sda2 and /dev/sdd2 is an LVM partition holding volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that had drives failing to show up at boot. Somehow, /dev/sdd2 ended up in array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.

The obvious step of failing the partition, removing it, then adding (or re-adding) it just brings it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. I thought this might be the trick - after the zeroing, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.

So 'mdadm --add /dev/md1 /dev/sdd2' and it still comes back as a spare. Here is mdadm -D /dev/md1

/dev/md1:
Version : 1.0
Creation Time : Sat Jul 9 10:26:01 2011
Raid Level : raid1
Array Size : 488119160 (465.51 GiB 499.83 GB)
code....

I can't stop this array - the OS is running from it. And I can't easily boot from CD to repair it, since all IDE ports have disks attached.

Does anyone have an incantation to promote a spare to active?
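
One hedged guess: if the array's expected device count dropped to one during the cable mishaps, anything added will only ever be a spare. Worth checking, and if so, raising the count again - a sketch:

Code:
mdadm --detail /dev/md1 | grep 'Raid Devices'   # should say 2 for a two-way mirror
mdadm --grow /dev/md1 --raid-devices=2          # then the spare should start rebuilding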

View 2 Replies View Related

CentOS 5 :: Ext3fs On Mdadm/raid1 Freeze/lock-ups On Heavy Write?

Feb 27, 2011

I'm facing a problem where the server freezes on heavy writes.

System

CentOS 5.5 x86_64 with the latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2 GB DDR3.
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0

[Code].....
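
Not a confirmed fix, but a common mitigation for stalls under heavy writes is to lower how much dirty data the kernel accumulates before forcing writeback; a sketch (the values are examples):

Code:
sysctl vm.dirty_ratio vm.dirty_background_ratio   # current settings
sysctl -w vm.dirty_background_ratio=5             # start background writeback earlier
sysctl -w vm.dirty_ratio=10                       # block writers sooner, in smaller bursts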

View 19 Replies View Related

Ubuntu :: File System Check Failed - A Log Is Being Saved In /var/log/fsck/checkfs

Jan 9, 2010

Just after starting, Ubuntu 9.04 said: "File system check failed. A log is being saved in /var/log/fsck/checkfs if that location is writable. Please repair the file system manually. A maintenance shell will now be started. CTRL-D will terminate this shell and resume system boot. Give root password for maintenance (or type Control-D to continue)." I pressed Ctrl+D, and after login it said that it cannot find /home. I started with the live CD:

[Code]....
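
From the live CD the usual next step is to identify the partition that holds /home and run a manual filesystem check on it; a sketch, where /dev/sda5 is only a placeholder for the real partition:

Code:
sudo fdisk -l                 # find the partition that holds /home
sudo e2fsck -f /dev/sda5      # force a full check; answer the repair prompts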

View 9 Replies View Related

Hardware :: Raid1 Mdadm Repair Degraded Array With Used Good Hard Drive?

Jun 27, 2009

I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID1 array. mdadm --detail /dev/md0 shows:

0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1

I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue:

mdadm --manage /dev/md0 --fail /dev/sda1

but mdadm's response is:

mdadm: hot remove failed for /dev/sda1: no such device or address

I thought I must mark the failed drive as "failed" to prevent RAID1 from trying to mirror in the wrong direction when I install my used-but-good disk. I also want to reformat the good used drive first, right? I believe I must prevent the RAID array from automatically trying to mirror in the wrong direction.
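
Since that slot already shows as "removed", there is nothing left to mark as failed. A sketch of the usual procedure with a used disk - md only ever rebuilds onto the member being added, so the direction cannot go wrong (the partition name is an example):

Code:
mdadm --zero-superblock /dev/sda1        # wipe any stale md metadata on the used disk
mdadm --manage /dev/md0 --add /dev/sda1  # add it; the rebuild copies sdb1 -> sda1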

View 7 Replies View Related

Server :: Creating Backup Disk Image Of RAID 1 Array (MDADM)?

Oct 27, 2010

We have some servers that run in very harsh environments (research vessel) that need to have high availability. We have software RAID1 for some measure of resiliency, along with proper data backups (tapes etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAIDed devices (from a live CD):

Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img

then reverse the process on the new PC, finally using mdadm --assemble to re-create and rebuild the array.
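
For completeness, the restore leg of that approach might look like the following sketch, run from a live CD on the replacement box (device and paths are examples):

Code:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096
mdadm --assemble --scan     # then add the second disk and let md rebuild the mirror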

View 1 Replies View Related

CentOS 5 Hardware :: Hard Disk Crash / Data Recovery In RAID1 Webhosting Server

Feb 1, 2011

We have had a hard disk crash in our RAID1 web hosting server running CentOS 5 and Plesk. We first realized something was wrong when our main site didn't load and showed MySQL errors instead. We then found out that the system was in a read-only state, something that had also happened the day before yesterday, but which we could fix with an fsck. The system then worked well until around 18 hours later, when it crashed with the same symptoms. So we rebooted the server and wanted to do a filesystem check again, but the HDD wouldn't even load. It was gone. Unfortunately, we had not realized that the second disk in the system had also stopped working some time ago. Fortunately we had our main site backed up externally. So we re-installed a fresh box and mounted the two drives on the system. We checked the hard disks: one is practically empty (the older one), the other has almost only files in lost+found, but these are all "numbered", with no real filenames.

View 2 Replies View Related

Fedora :: Tool For Hard Disk Detection

Jul 16, 2009

These days my hard disk is acting weird. I can see on my PC that the red activity light (which comes on when the disk is read from or written to) stays on and Linux crashes; it seems the hard disk has problems. So, in your experience, is there a tool that lets me see the health status of all my hard disks?
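
The usual tool for this is smartmontools; a sketch of checking a drive's SMART health from the command line (GUI front-ends such as GSmartControl show the same data):

Code:
sudo smartctl -H /dev/sda    # overall health verdict
sudo smartctl -a /dev/sda    # full attributes, error log and self-test history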

View 5 Replies View Related

General :: Raid - RAID1 With Only One Disk

Oct 27, 2010

I have a hypothetical situation in which I installed my operating system using a RAID1 mirror. At some point I decided that this setup was overkill, my machine isn't system critical, I value doubling my storage space more than speedy recovery, I'm doing routine backups, etc...

Short of backing up my system volume and repartitioning, or otherwise starting over, is there a way I can reconfigure my RAID1 array to only expect one disk so that mdadm no longer reports a Degraded state?
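
A sketch of one way to do this with mdadm - shrinking the mirror so it only expects a single member. Hedged: back up first, and --force is required because mdadm is reluctant to reduce a RAID1 below two devices:

Code:
mdadm --grow /dev/md0 --raid-devices=1 --force
mdadm --detail /dev/md0     # should now report a clean, non-degraded single-device array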

View 3 Replies View Related

Ubuntu :: Mdadm Array Gone After Failed Install?

Mar 30, 2010

I tried to install Ubuntu 10.04 using the beta alternate install CD.

Everything went fine until the partitioning section.

I chose manual partitioning and all my existing partitions were detected correctly, including my 2 mdadm RAID0 arrays.

I chose md0 as my / partition and chose to format the partition.

I chose md1 as my /home partition and chose to keep the data.

When I chose to continue and write the changes to disk, the installer started to create an ext4 filesystem on md0, then stopped with an error that the kernel could not re-read the partition table.

I aborted the installation at this point.

Now I can not access either of my arrays.

I have booted a livecd and installed mdadm. When I checked /etc/mdadm/mdadm.conf my existing arrays were already listed.

Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

[Code].....
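
From the live CD it may be worth checking whether the member superblocks survived before anything else; a sketch (the partition names are examples for the RAID0 members):

Code:
sudo mdadm --examine /dev/sda1 /dev/sdb1   # do the superblocks still exist?
sudo mdadm --assemble --scan               # try assembling from mdadm.conf
cat /proc/mdstat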

View 3 Replies View Related

Ubuntu :: MDADM - How To Determine Which Drive Has Failed

Aug 7, 2011

I know this is probably a very generic Linux question, but since I am using Ubuntu I thought it safer to ask here. I have jumped into the deep end of Linux, and I am afraid that I will be forced to swim sooner rather than later.

Let me start at the beginning. I am, and probably will be, a Windows fan for a long time - let me not list the reasons, or else you guys will probably hang me out to dry. The one thing I have discovered is that Windows sucks at creating a software RAID, especially the RAID5 that I was looking for. In any case, after losing plenty of data via Windows, I decided to attempt Linux/Ubuntu. I must say, so far so good.

I used this excellent guide: [URL] and must say that the RAID is performing admirably. I am currently busy adding/growing the 12th 1 TB drive onto the RAID, with no issues so far (among other major wow advantages I have noticed, like the speed of writing to and reading from the RAID).

See the mdstat output below (cat /proc/mdstat):
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdl1[11] sdj1[10] sdb1[1] sdm1[6] sdk1[2] sdi1[8] sdh1[0] sdg1[3] sdf1[4] sde1[9] sdd1[5] sdc1[7]
9767599360 blocks super 0.91 level 5, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
[==========>..........] reshape = 50.7% (495864576/976759936) finish=1482.0min speed=5407K/sec
unused devices: <none>

My questions:
If one drive fails in the array (for example sdk1), how the heck do I determine which physical hardware device has failed, without compromising the other data? Unfortunately I can't afford to back up 11 TB of data (personal server). I don't have space in the box for a mouse, never mind a hot-spare drive, so adding a replacement drive before removing the faulty one is rather difficult - but if that's the only option I will have to go with it. As everybody knows, RAID5 only tolerates one failed drive, so I would like to resolve the issue as quickly as possible, without resorting to disconnecting one drive at a time to work out which is which, and without assuming that the drive assignments (sda/sdb/sdc) stay constant.

What is the most intuitive/fastest way to determine that a faulty drive exists in the array? I.e. is there some sort of GUI solution for mdadm that will tell me the moment a drive has turned faulty? The box is currently not on the internet, meaning notification via email is not possible. And is there a non-destructive way to convert the RAID5 to RAID6? I would rather sacrifice 1 TB of free space for peace of mind, and RAID6 will make troubleshooting a bit easier since three drives would have to fail before data loss.
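
A sketch of how a kernel name like sdk can be mapped to a physical drive, and how mdadm can raise an alarm without email (the alert script path is only a placeholder):

Code:
ls -l /dev/disk/by-id/ | grep sdk          # serial-number symlinks pointing at sdk
sudo smartctl -i /dev/sdk | grep -i serial # match against the label printed on the drive
# mdadm --monitor can run a program on Fail/DegradedArray events:
sudo mdadm --monitor --scan --daemonise --program=/usr/local/bin/raid-alert.sh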

View 9 Replies View Related

General :: Mdadm Cannot Remove Failed Drive?

Jan 17, 2010

I am setting up a software RAID6 for the first time. To test the RAID I removed a drive from the array by popping it out of the enclosure. mdadm marked the drive as F and everything seemed well. From what I gather, the next step is to remove the drive from the array (mdadm /dev/md0 -r sdf), but when I try this I receive the error:

mdadm: cannot find /dev/sdf: No such file or directory

That is true: when I plugged the drive back in, the machine recognized it as /dev/sdk. My question is how do I remove this non-existent failed drive from my array? I was able to re-add it just fine as /dev/sdk with mdadm /dev/md0 -a /dev/sdk.

Also, is there any way to identify a drive by ID or something similar, so it keeps the same drive name and this doesn't happen again?

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[10](F) sde[3] sdd[2] sdc[1] sdb[0]
13674601472 blocks level 6, 64k chunk, algorithm 2 [9/8] [UUUU_UUUU]
[>....................] recovery = 2.7% (54654548/1953514496) finish=399.9min speed=79132K/sec

[Code].....
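
mdadm accepts the words "failed" and "detached" in place of a device name for --remove, which covers exactly this case of a member whose node has vanished; a sketch, plus stable naming via /dev/disk/by-id:

Code:
mdadm /dev/md0 --remove detached   # drop members whose device node no longer exists
mdadm /dev/md0 --remove failed     # drop anything still marked (F)
ls -l /dev/disk/by-id/             # persistent per-drive names to use instead of /dev/sdX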

View 2 Replies View Related

Fedora :: Resync Two Raid1 Hard Disk In 12?

Jun 9, 2010

I have two 80 GB IDE hard disks. I have created a RAID1 partition on each drive using the [URL] link, and the RAID is working fine. But when I copy some data onto one array (md0), this data is not automatically copied to the second one (md1). I want data written to one hard disk to be automatically written to the second hard disk as well.
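
It sounds like each disk got its own single-disk array (md0 and md1), and mirroring only happens between members of the same array. A sketch of what a proper two-disk RAID1 looks like - hedged, since this recreates the array and destroys whatever is on those partitions (the names are examples for IDE disks):

Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mkfs.ext3 /dev/md0        # one filesystem on the mirror; writes go to both disks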

View 11 Replies View Related

Fedora :: RAID1 - Installing Grub On 2nd Disk?

Dec 7, 2010

Just finished installing F14 with a RAID1 setup across 2 HDDs (SATA). The entire drives are mirrored, including swap. As I had done in the past, I was planning on installing GRUB on the MBR of the 2nd HDD. In prep for this I did the following to locate the GRUB setup files:

grub> find /grub/stage1
find /grub/stage1
(hd0,1)

[code]....

I was surprised; I expected to get (hd0,0) & (hd1,0), not (hd0,1) & (hd1,1).

running "fdisk -l" I get:

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes

[code]....

The MBR is the first 512 bytes of the drive, and each partition has a boot sector. In my case GRUB stage1 is on the 2nd partition of sda and the 2nd partition of sdb. What I don't understand is how GRUB stage1 can be on sda2 & sdb2, since I am assuming sda1 & sdb1 would be the first partitions of the drives and therefore contain the MBRs. Maybe this is because sda1 & sdb1 are swap partitions?
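
Note that grub's find reports the partition whose filesystem contains /grub/stage1 (here /boot on the second partition), not the MBR. A sketch of the classic GRUB-legacy recipe for putting boot code on the second disk's MBR so it can boot if the first disk dies:

Code:
# Commands typed at the grub> prompt of the GRUB legacy shell:
# map the second disk as hd0, point root at the partition holding /grub/stage1
# (per the find output above), and write stage1 into sdb's MBR.
grub
device (hd0) /dev/sdb
root (hd0,1)
setup (hd0)
quit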

View 5 Replies View Related

Software :: Converting From RAID1 To A Single Disk?

Apr 5, 2011

I recently had an issue where one of my RAID1-configured drives died on a Debian box. By die, I mean that after POST I am presented with a black screen and a flashing cursor, nothing more. I booted into a Knoppix shell and did:

mdadm --assemble /dev/md0 --run -u <UUID of the only working drive> /dev/sda1

I could then 'mount /dev/sda1 /mnt/' and see all the contents, as I should be able to. However, I cannot boot from this device by itself for some reason. I have reinstalled GRUB with 'grub-install --recheck --root-directory=/mnt /dev/sda1' (something like that; I can't remember the exact --root-directory switch). I restarted with the failed drive unplugged, and again I'm faced with the black screen and flashing cursor.
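
One hedged suggestion: GRUB boot code normally needs to go to the disk's MBR (/dev/sda), not the partition (/dev/sda1). A sketch from the Knoppix shell, assuming the surviving half is still mounted at /mnt:

Code:
grub-install --root-directory=/mnt /dev/sda   # install to the MBR of the whole disk
# then check that /mnt/boot/grub/menu.lst still points at the right root device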

View 1 Replies View Related

Ubuntu Servers :: Convert Mdadm 6 Disk Raid5 To 5 Disk Raid5?

Jun 30, 2011

I know you can fail and then remove a drive from a RAID5 array. This leaves the array in a degraded state.

How can you remove a drive and convert the array to just a regular, clean array?
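
mdadm can reshape to fewer devices, but only after the filesystem and the array size have been shrunk first, and it is risky enough that a backup beforehand is essential. A rough sketch only - the sizes are placeholders that must be calculated for the actual member size:

Code:
# Order: filesystem first, then the array size, then the reshape to 5 devices.
resize2fs /dev/md0 3600G                       # shrink the FS below the new array size
mdadm --grow /dev/md0 --array-size=3906887680  # new usable size in KiB (placeholder)
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape.bak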

View 9 Replies View Related

Ubuntu :: MDADM RAID 5 Failed But Disks Are Still Present?

Jun 7, 2010

I just had a whole 2 TB software RAID5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my RAID5 sets. It seems like two of the disks are not showing up properly. What I mean by that is the OS picks up the disks, but it doesn't see the partitions.

I ran smartctl on all the drives in question and they're all in good working order.

Is there some sort of repair tool I can use to scan the busted drives (since they're still visible) and fix any errors that might be present?

Here is what the "good" drive looks like when I use sfdisk:

Quote:

sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 121600 121601- 976760001 83 Linux
/dev/sda2 0 - 0 0 0 Empty

[Code]....
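
A diagnostic sketch for the two "bad" drives - checking whether the partition tables and md superblocks are still there before attempting any repair (device names are examples):

Code:
sudo sfdisk -l /dev/sdb                  # is the partition table still present?
sudo mdadm --examine /dev/sdb /dev/sdb1  # superblock on the whole disk or the partition?
sudo mdadm --assemble --scan --verbose   # verbose output says why members are rejected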

View 2 Replies View Related

General :: Difference Between Raid1 And Ordinary Hard Disk?

Jun 8, 2011

Is there a difference between RAID1 and an ordinary hard disk?

View 11 Replies View Related

Ubuntu :: MDADM Brand New Disk Failures

Jun 19, 2010

I currently have a RAID0 setup with two disks and have no problems whatsoever. I recently bought 3 brand new 1 TB drives and set up RAID5. I created the array and everything worked fine; I formatted the array and mounted it. But when I tried to copy 750 GB of data onto my new array, it copies fine for a while and then fails. After a reboot the array shows as inactive, and I tried everything but could not get the RAID back to active status. I removed the array, recreated it and formatted it again, and when I started copying my files over, exactly the same thing happened.
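
Before recreating the array yet again it may be worth finding the underlying error; repeated failures mid-copy often show up as link resets or SMART errors. A diagnostic sketch (drive names are examples):

Code:
dmesg | grep -iE 'ata|error'             # link resets / timeouts during the copy?
sudo smartctl -a /dev/sdb                # per-drive health and error log
sudo mdadm --stop /dev/md0               # stop the half-assembled, inactive array
sudo mdadm --assemble --scan --force     # then try to bring it back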

View 3 Replies View Related

Software :: Mdadm - Raid0 Array Appear With Only One Disk

Dec 1, 2010

When I set up Ubuntu 10.10 I had only one HDD around, so I installed my system with the idea that I would add the 2nd HDD for RAID1 later on. Last weekend I wanted to add the HDD, but discovered that Ubuntu had created a RAID0 array. So I went on and tried different things: removing the 1st HDD from the RAID0 array, creating a RAID1 with two disks, and so on... I finally managed to synchronize both disks, but after a reboot the RAID0 array appeared again with only one disk. Now I know I should have written the mdadm.conf and fstab files... My last tries resulted in a missing superblock. Here is the story:

[Code].....
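
For the record, once an array is in the desired state it can be persisted so it survives reboots; a sketch for Ubuntu:

Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u     # so the initramfs knows about the array at boot
sudo blkid /dev/md0          # UUID to reference in /etc/fstab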

View 1 Replies View Related

Ubuntu Servers :: Converting Single Disk LVM To RAID1 LVM (grub)

May 5, 2010

I have an Ubuntu Server (8.10) running under Citrix XenServer (though that shouldn't make a difference).

I installed on a single disk:

xvda1 - 200 MB - /boot
xvda2 - 9.8 GB - LVM (ubuntu-base)

The LVM is:

swap - 1.0 GB
root - 8.8 GB - /

I have successfully gotten this converted to RAID1 by adding a new drive (xvdb) and following the Debian howtoforge article [URL]

What I have not been able to do, is get grub working properly.

If I fail xvdb and reboot the system, everything comes up and I can reboot and run.

If I fail xvda and reboot the system, XenServer gives me a bootloader error. i.e.: no grub

If someone has done this, can they tell me what GRUB commands to run to get a successful boot if the primary disk fails?
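
A sketch of the GRUB-legacy commands (Ubuntu 8.10) that put boot code on the second disk and map it as hd0, so it can boot on its own if xvda disappears - the partition number is an assumption based on the layout above:

Code:
# Commands typed at the grub> prompt of the GRUB legacy shell:
# treat the surviving disk as the first BIOS disk, point root at the /boot
# mirror on xvdb1, and write stage1 into xvdb's MBR.
grub
device (hd0) /dev/xvdb
root (hd0,0)
setup (hd0)
quit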

View 1 Replies View Related






