General :: RAID Disk Failure Interrupt Notification?

Apr 4, 2010

I have installed a Fedora 12 Linux system onto a RAID 1 file system. I now need a way of getting a notification if a disk fails. Is there an SNMP MIB that covers Intel RAID? I have searched, but the answer still eludes me.

View 1 Replies



Hardware :: Fedora 11 RAID 1 - Disk Failure - Boot From The Single Working Disk?

Oct 16, 2009

my Fedora 11 system is no longer starting. It stops with the message:

Code:

VFS: Can't find ext4 filesystem on dev dm-0

The system had been telling me for a while that many sectors on one disk of the (software) RAID array had already failed. So I tried disconnecting each of the disks and starting from each separately. Unfortunately this is not working (from one it does not work at all; the other gets exactly as far as with both disks), and when I tried to recover the system with the Fedora DVD, it said no distribution was found. I am quite new and do not know much about Linux systems, so I do not know what further information you could need. Maybe it is important that both disks are encrypted (the system gets far enough that I can type in the password).

View 2 Replies View Related

Ubuntu :: Gdu Notification Daemon - Disk Failure Is Imminent. Reallocated Sector Count Failing

Jul 31, 2011

After installing Ubuntu on my computer separately, it is giving "gdu notification daemon - Disk failure is imminent. Reallocated sector count failing."

View 2 Replies View Related

CentOS 5 :: Disk Failure And Software Raid 1?

Jun 9, 2011

Following scenario: My server in some data center on a different continent with two disks and software raid 1.

One day I see that a disk failed (for example with /proc/mdstat). Of course I should replace the failed disk asap. Now that I think about it, I am not sure how. What should my email to the data center support guy mention to make sure that guy doesn't replace the wrong disk?

With hardware RAID it is very easy, because the controller usually has some kind of red LED indicator. But what about software RAID?
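
With md RAID the usual trick is to identify the failed member in /proc/mdstat and then give the technician the drive's serial number, which is printed on the disk label. A rough sketch; the device name /dev/sdb here is only an example:

Code:
# which member failed -- look for (F) markers and [U_] status:
cat /proc/mdstat
mdadm --detail /dev/md0

# read the serial number of the failed disk so the technician can
# match it against the sticker on the drive:
smartctl -i /dev/sdb | grep -i serial
hdparm -I /dev/sdb | grep -i serial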

View 8 Replies View Related

Ubuntu :: Mdadm Raid 5 Single Disk Failure

Feb 2, 2010

Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I shut the box down and hooked the drives up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and on with the drive present just to see if I could get the RAID up again, but I haven't been able to. So far I've tried:

Code:

mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted

[code]....

I'm looking for a way to get the RAID running with just 2 drives until I can warranty the Seagate and buy an external 1.5 TB drive to use as another backup, and then for how to remove the bad drive from the array and replace it with a fresh one, without data loss.
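
One commonly suggested sequence for this situation, sketched here with guessed device names (check mdadm --examine on each disk first), is to force-assemble the degraded array from the two good members and add the replacement later:

Code:
# start the array degraded from the two members that still have superblocks:
mdadm --assemble --force /dev/md0 /dev/sdc /dev/sdd
cat /proc/mdstat

# once the warranty replacement arrives, add it and let the array rebuild:
mdadm /dev/md0 --add /dev/sdb
cat /proc/mdstat   # watch the recovery progress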

View 3 Replies View Related

Ubuntu :: MDADM RAID 5 Disk Failure And Recovery?

Jun 18, 2010

I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:

Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 - the RAID device built from the three partitions above

The sda1 disk has failed, and the array is running on 2 of 3 disks. Also present:

/dev/sdc (OS disk)
/dev/sde (new 2tb disk - unused)
/dev/sdf (new 2tb disk - unused)

My plan was to rebuild the array using the two new disks as RAID 1. Would the best way to do this be to create a new RAID 1 device on /dev/md1 and then copy all data over from /dev/md0? Also - this may sound stupid, but since all 3 drives in md0 are identical, I'm not sure which physical disk is bad. I tried disconnecting each disk one by one and rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure how to remove it properly.
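
A sketch of that plan, with assumed device names and mount points: build the RAID 1 on the new disks, copy the data, and use serial numbers to find the dead drive, since mdadm reports which devices it is still using:

Code:
# build the new mirror on the two unused 2 TB disks
# (after creating a Linux raid partition on each) and copy data over:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mkfs.ext3 /dev/md1
mount /dev/md1 /mnt/md1
rsync -a /mnt/md0/ /mnt/md1/

# remove the already-failed member from the old array:
mdadm /dev/md0 --remove /dev/sda1

# identify the bad disk physically: print the serials of the disks md0
# still lists as active; the serial NOT listed belongs to the failed drive:
smartctl -i /dev/sdb | grep -i serial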

View 3 Replies View Related

Ubuntu :: Rebuild Md RAID Array After OS Disk Failure?

Dec 19, 2010

I went to set up my Linux box and found that the OS drive had finally died. It was an extremely old WD Raptor drive in a hot box full of drives, so it was really only a matter of time before it quit on me. Normally this wouldn't be such a big deal; however, I had just recently built an md RAID 5 array of three 1 TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now, I know that the array is intact - all the required data is sitting on those disks. Since only the OS-level disk failed on me, I should be able to get a new disk in there, reinstall Ubuntu, and then rebuild that array. How exactly do I go about doing that with mdadm? Do I create the array from the /dev devices like when I initially built the array?
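
Since the superblocks live on the array members themselves, a reinstall normally only needs an assemble, not a create. A minimal sketch, assuming a stock Ubuntu install with mdadm:

Code:
# do NOT run --create again; just assemble from the existing superblocks:
mdadm --assemble --scan
cat /proc/mdstat

# record the array so it assembles automatically at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u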

View 2 Replies View Related

General :: Delivery Status Notification (Failure)

Aug 9, 2010

OMG, something went wrong with my email notifications in Nagios. Just when I thought I had successfully set it up... suddenly, just so suddenly, it failed. I got the mail below:

Code:

Delivery to the following recipient failed permanently:

root@server.com

Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 #5.1.0 Address rejected root@server.com (state 14).

Received: by 10.114.109.1 with SMTP id h1mr16785732wac.203.1281263841711;
Sun, 08 Aug 2010 03:37:21 -0700 (PDT)
Return-Path: <layleng91@gmail.com>
Received: from ubuntu (bb119-74-182-12.singnet.com.sg [119.74.182.12])

[code]....

View 15 Replies View Related

Ubuntu :: Email Delivery Status Notification Failure

Feb 11, 2010

My EeePC used to be called 'machine', but some time ago I renamed it 'deimos'. There's been no problem with this for ages; but today I got an email 'Delivery Status Notification (Failure)' - because of a failed job, Anacron had tried to send a message to 'root@machine'. I've checked /etc/hosts and /etc/hostname, and in both locations the computer's name is correctly configured as 'deimos'. Where else should I look?

View 1 Replies View Related

General :: Tools For Checking Hard Disk Failure?

Oct 5, 2010

I am just wondering if there are any tools for checking the health of a hard disk. I have had my hard disk for 4 years, and now I think it is having some problems. Are there any tools I can use to check the condition of the hard disk?
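
The standard tool for this is smartmontools, which reads the drive's own SMART counters. A quick sketch, assuming /dev/sda is the disk in question:

Code:
smartctl -H /dev/sda       # overall health verdict
smartctl -a /dev/sda       # full attribute table; watch Reallocated_Sector_Ct
                           # and Current_Pending_Sector
smartctl -t long /dev/sda  # start a drive self-test; read results with -a later
badblocks -sv /dev/sda     # read-only surface scan (slow, but thorough)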

View 2 Replies View Related

General :: Raid - RAID1 With Only One Disk

Oct 27, 2010

I have a hypothetical situation in which I installed my operating system using a RAID1 mirror. At some point I decided that this setup was overkill, my machine isn't system critical, I value doubling my storage space more than speedy recovery, I'm doing routine backups, etc...

Short of backing up my system volume and repartitioning, or otherwise starting over, is there a way I can reconfigure my RAID1 array to only expect one disk so that mdadm no longer reports a Degraded state?
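
mdadm can shrink a mirror to a single member, which leaves the array reporting "clean" instead of "degraded". A sketch, assuming /dev/sdb1 is the member being dropped:

Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --grow /dev/md0 --raid-devices=1 --force
mdadm --detail /dev/md0   # State should now read "clean", not "degraded"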

View 3 Replies View Related

General :: Setup RAID 5 And One Spare Disk

Aug 4, 2010

I want to build a 6xSATA RAID 5 system with one of the disks as a spare. I think this gives me a chance of 2 of the 6 disks failing without losing data. Am I right?
Hardware: Intel ICH10R
First I will create a 3xSATA RAID 5, then I will add the spare disk, and after that I will add the other disks. This is what I think I should do.

Step 1:
Create RAID Device
Code:
mdadm --create --verbose /dev/md0 --metadata 1.2 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
I read that "--metadata 1.2" is the best option. It is true?
Create filesystem on the RAID device

Using this method of calculation:
* chunk size = 128 KiB (for RAID 5)
* block size = 4 KiB (recommended for large files, and most of the time)
* stride = chunk / block = 128 KiB / 4 KiB = 32 blocks
* stripe-width = stride * ( (n disks in RAID 5) - 1 ) = 32 * ( 5 - 1 ) = 32 * 4 = 128 blocks
Then:
Code:
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=128 /dev/md0

Step 2:
Add spare-disk
Code:
mdadm --add /dev/md0 /dev/sdd1
Is this enough?

Step 3:
Adding disks:
Code:
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
fsck.ext3 /dev/md0
resize2fs /dev/md0
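
For what it's worth, one way to confirm each step took effect is to watch the array state between steps:

Code:
cat /proc/mdstat          # the spare shows as (S); a reshape shows its progress
mdadm --detail /dev/md0   # confirms level, device count and spares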

View 1 Replies View Related

General :: RAID Disk Inactive After Reboot

Aug 12, 2010

I just configured two RAID arrays, but after a reboot they are not mounted and seem to be inactive.

md127 = sde1, sdf1 and sdi1 (raid 5)
md0 = sda1 and sdh1 (raid 0)
Code:
[root@server /]# cat /proc/mdstat
Personalities :
md127 : inactive sdf1[1](S) sde1[2](S)
78156032 blocks
md0 : inactive sda1[0](S)
488382977 blocks super 1.2
unused devices: <none>

Code:
[root@server /]# fdisk -l | grep "Disk /"
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
Disk /dev/sdc: 122.9 GB, 122942324736 bytes
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
Disk /dev/sde: 40.0 GB, 40020664320 bytes
Disk /dev/sdf: 40.0 GB, 40020664320 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 251.0 GB, 251000193024 bytes
Disk /dev/sdi: 40.0 GB, 40020664320 bytes
Disk /dev/sdj: 500.1 GB, 500107862016 bytes

Code:
[root@server /]# cat /etc/mdadm.conf
DEVICE /dev/sdi1 /dev/sdf1 /dev/sde1 /dev/sda1 /dev/sdh1
ARRAY /dev/md127 UUID=5dc0cf7a:8c715104:04894333:532a878b auto=yes
ARRAY /dev/md0 UUID=65c49170:733df717:435e470b:3334ee94 auto=yes

As you can see, they now show up as inactive, and for some reason sdi1 and sdh1 are not even listed. What can I do to get them back? To make matters worse, I placed some important data on them, and even though I was clever enough to keep an extra copy on another drive - guess which drive that was? So I need to get them activated as-is (at least long enough to get the data off them) before I can rebuild them from scratch. I'm running Mandriva 2010.1 and created them using the built-in disk partitioner.

View 14 Replies View Related

General :: Recover Raid - LVM On Non System Drives After System Failure

Jan 19, 2011

I have (had) Debian Testing running on a 250GB IDE hard drive, partitioned normally.

I also have 4x 1TB drives in a raid 5 using mdadm, and 2x 500GB drives in a raid 1 also with mdadm.

I put the two arrays in lvm using:

I then used "lvcreate" to make storage/backup 300GB, and the rest went to storage/media (approx. 2TB usable). I put an xfs filesystem on both and mounted them.

All was working fine until the system drive shorted out and died on me this morning. As far as I can tell, all my other drives and everything else is fine. I do a daily rsnapshot of the filesystem, which of course is residing on storage/backup (stupid, I know). So I have full backups of everything, but I'll have to put a new hard drive in and reinstall Debian before I can restore everything.

I've reinstalled before and simply reassembled and remounted mdadm arrays with no problems, but this is the first time I've used LVM, so I'm not sure what I have to do to restore everything. Is it as simple as reinstalling the system and then doing a:
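
The post is cut off here, but for this kind of recovery the usual sequence is to reassemble the arrays and then let LVM rediscover the volume group from the metadata stored on the PVs themselves. A sketch, using the "storage" VG name from the post:

Code:
mdadm --assemble --scan
vgscan                # finds the "storage" VG from the PV headers
vgchange -ay storage
mount /dev/storage/backup /mnt/backup
mount /dev/storage/media /mnt/media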

View 4 Replies View Related

General :: Where Is The Missing Disk Space On Software Raid

Nov 3, 2010

Purchased (4) 2 TB drives and created a RAID 5 array expecting to have 6 TB of usable disk space; however, actual usable space is 5.46 TiB.

So, the question is where did the disk space go?

First off, I can say with certainty that the disks' actual usable size is verified at 2 TB each, having mounted and formatted them on a non-Linux system (OS X).

Disks - 2 TB per disk, tested HFS, actual 2 TB usable
root@server:/server# fdisk -l 2>/dev/null | egrep "sd[hijk]" | grep Disk
Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes

[Code]....
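
The discrepancy is almost certainly just units: the drives are 2 TB (decimal), which is only about 1.82 TiB (binary), and RAID 5 keeps n-1 disks of data. A quick back-of-the-envelope check:

Code:
# 2000398934016 bytes / 1024^4 = 1.819 TiB per disk
# RAID 5 of 4 disks stores (4 - 1) disks of data:
# 3 x 1.819 TiB = 5.46 TiB, which equals 6.0 TB -- nothing is missing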

View 2 Replies View Related

General :: Debian Software RAID 1- Boot From Both Disk

Mar 15, 2011

I newly installed Debian squeeze with software RAID. The way I did it was as follows (as also given in this thread):

- I have 2 HDDs of 500 GB each. On each of them, I created 3 partitions (/boot, / and swap).
- I selected the hard drive and created a new partition table.
- I created a new partition of 1 GB, specified that the partition be used as a physical volume for RAID, used it for /boot, and enabled the bootable flag.
- Created another partition of 480 GB, specified it as a physical volume for RAID, and used it for /.
- Created another partition and used it for swap.

Then RAID configuration:
Through Configure RAID menu -> create MD device ->
(2 for the number of drives, 0 for spare devices)
Next select the partitions you want to be members of /dev/MD0. I selected /dev/sda1 and /dev/sdb1 (for /boot)
Next select the partitions you want to be members of /dev/MD1. I selected /dev/sda6 and /dev/sdb6 (for /)
And no RAID for swap partitions

'Finish partitioning and write changes to disk' -> finish the rest of the install like normal. Everything is OK now, except I am not sure how to test my RAID config. When I pull the power from one HDD, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other. In Debian squeeze there is no standalone grub command, so I am not sure how to make my software RAID bootable from both disks. I configured the /boot partitions of both disks as bootable; not sure whether that is OK.
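
In squeeze the boot loader is GRUB 2 (the grub-pc package), and it can simply be installed to the MBR of both disks so that either one can boot alone. A sketch:

Code:
grub-install /dev/sda
grub-install /dev/sdb
update-grub
# or choose both disks interactively:
dpkg-reconfigure grub-pc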

View 2 Replies View Related

General :: No Automatic Rebuild Of RAID 5 After Replacing Bad Disk

Dec 12, 2009

I have a 5 disk raid 5 array that is composed of SATA A:0,1; SATA B: 0,1, and SATA C:0, and one of the disks (SATA A:0) recently went bad on me. I have an ICP raid controller that is about 5 years old. I replaced SATA A:0. After rebooting, I went into the controller and verified that it saw the disk in the hard-disk info section...there I noticed that in the "status" section, that the SATA C:0, SATA B:1 disks were listed as being "in array", the SATA A disks were blank, and the SATA B:0 disk was listed as "fragment". When I go into the "repair array" section, the controller tells me that there are no arrays that are in failure, error, or need to be rebuilt.

This puzzles me, as I thought the controller would know that the array needs to be rebuilt after replacing the disk and I don't see a way to initiate a rebuild. If I just let the server boot after replacing the disk, then I get back that there are the correct number of disks in the raid 5 and that it is ready, however, the screen then goes blank and I get a blinking cursor and the system seems to hang. There are no activity lights on any of the drives associated with the raid 5, which makes me think that the system is not rebuilding the array at this point.

View 3 Replies View Related

General :: MDADM Error: Creating A RAID 1 RAM Disk?

Apr 7, 2011

I am trying to create a Raid 1 ram disk. Below are the commands I used:

[root@abidbodal dev]# mke2fs -m 0 /dev/ram8
[root@abidbodal dev]# mount /dev/ram8 /mnt/rd8
[root@abidbodal dev]# mke2fs -m 0 /dev/ram9

[code]....
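
For what it's worth, the commands above never create the mirror itself: the array has to be created with mdadm first, and the filesystem then goes on the md device rather than on the individual ram disks. A sketch, assuming the kernel's /dev/ram* devices (brd module) are available:

Code:
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/ram8 /dev/ram9
mke2fs -m 0 /dev/md9
mount /dev/md9 /mnt/rd8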

View 3 Replies View Related

Ubuntu Servers :: HW RAID Disk Shows Up In Fstab But Not In /dev/disk/by-uuid?

Jun 28, 2010

I have an SiI hardware SATA RAID card, with two 500GB disks in mirrored RAID configuration. When I first plugged them in and set it up, things seemed to work ok, but on boot the raid controller told me that the RAID needed rebuilding, and it would happen automatically after POST. So I didn't worry about it, and the drive mounted fine, and it's been that way for years. I just went in and manually on-line rebuilt the RAID in the controller's BIOS, and now when I boot into Ubuntu, both disks show up in fdisk, but neither show up in /dev/disk/by-uuid. Am I missing something?

View 9 Replies View Related

General :: Mount A Single RAID 1 Disk / Partition As Ext3?

Jul 7, 2011

I need to copy data from a single HD, which used to be part of a Linux RAID 1. I've googled around, but can't find any clue how to mount partitions from this single HD.

Background: The HD comes from a linux based NAS box Synology DS207+. The NAS uses ext3 as filesystem. Both NAS disks are fine, but the other NAS hardware is dead and not worth repairing or replacing.
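
A degraded single-disk assemble is the usual approach here. A sketch, assuming the disk shows up as /dev/sdb and the data partition is the third one (common on Synology units, but check with fdisk -l):

Code:
mdadm --examine /dev/sdb3            # confirm it is an md member
mdadm --assemble --run /dev/md0 /dev/sdb3
mount -o ro /dev/md0 /mnt/nas
# with an old 0.90 superblock (stored at the end of the partition),
# mounting the member directly may also work:
mount -o ro /dev/sdb3 /mnt/nas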

View 1 Replies View Related

Ubuntu Installation :: Migrate Working Single Disk System To Existing RAID Array Using Disk UUIDs

Aug 1, 2010

I had done a new lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.

I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).

These are the commands I used:

Quote:

cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*

[Code]....

I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...

So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory"- so I created directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.

It's still trying to use /dev/disk/by-uuid/412d... to boot.

What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
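
A sketch of the missing steps: the grep/cat errors came from running update-initramfs in a bare chroot, so /proc, /sys and /dev need to be bind-mounted in first, and both fstab and GRUB must point at the array's real UUID (paths and device names below are assumptions):

Code:
blkid /dev/md0                       # the UUID to put in fstab and grub
mount --bind /proc /media/raid_array/proc
mount --bind /sys  /media/raid_array/sys
mount --bind /dev  /media/raid_array/dev
chroot /media/raid_array
update-initramfs -u
update-grub
grub-install /dev/sda                # whichever disk the BIOS boots from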

View 2 Replies View Related

CentOS 5 Hardware :: Fake Raid Versus Hardware Raid After A Hardware Failure?

Oct 12, 2010

Recently, while using a Highpoint 2310 (RAID 5), I lost the motherboard and CPU. I had to reinstall CentOS and found it needed to initialize the array to function - total loss of data. Question: if I use a true hardware card (3ware 9650SE) and experience a serious hardware loss, or lose the C drive, can the card be installed with the drives on a new motherboard and function without data loss, even if the OS must be reinstalled?

View 4 Replies View Related

Ubuntu :: MDADM Raid 5 - OS Failure?

Jun 5, 2011

I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't: when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: "Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error. mdadm: Not enough devices to start the array." I have tried mdadm --assemble --scan and it gives this output: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array." I know that all 4 drives are present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:

root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90

[code]....
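
A sketch of the usual next step, with guessed device names: examine each member to see its event count and status, then retry the assemble naming every disk explicitly and forcing it if the counts differ slightly:

Code:
mdadm --examine /dev/sd[bcde]1       # adjust names/partitions to match
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1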

View 2 Replies View Related

CentOS 5 :: Software RAID 1 HDD Failure

Nov 6, 2009

1. One of my HDDs (sda) failed in software RAID 1. I RMA'd the HDD to Western Digital and got another one. Now, do I have to format it before putting it into my CentOS server? If yes, how do I format it?

2. Also, since the sda drive failed, do I mark sda as failed in the RAID, then remove the sda HDD and pop the new HDD into sda's place? Or do I switch sdb to sda and put the new HDD in sdb's place?

3. After that, I add it to the RAID, correct? Then once the RAID rebuilds, I have to redo GRUB? Can GRUB be set up via ssh only, or do I need to be at the datacenter or get a KVM?

4. Last question: I've got a Supermicro hot-swap HDD case, so do I need to shut down the server while I replace the HDDs? I just want to be sure I do this correctly.

The following is the guide that I will be using; please look at it and let me know if it is the correct procedure: [URL]. Another thing: when the HDD (sda) failed, I put it back into the RAID, but the HDD has bad sectors, which is why I'm replacing it.
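
For reference, a sketch of the usual hot-swap replacement sequence on a RAID 1 pair (device names are examples; the new drive needs no pre-formatting, since the resync copies everything):

Code:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# swap the physical disk in the hot-swap bay (no shutdown needed),
# then copy the partition table from the surviving disk:
sfdisk -d /dev/sdb | sfdisk /dev/sda
mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat        # wait for the resync to finish
grub-install /dev/sda   # works fine over ssh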

View 8 Replies View Related

General :: Scsi RAID Jbod And Arrays - Disk Utilization And The Corresponding Low Data Transfer

Jul 6, 2010

So I have a system that is about 6 years old, running Red Hat 7.2, supporting a very old app that cannot be replaced at the moment. The JBOD has 7 RAID 1 arrays in it, 6 of which are for database storage and another for OS storage. We've recently run into some bad slowdowns and drive failures causing nearly a week of downtime. Apparently none of the people involved, including the so-called hardware experts, could really shed any light on the matter. Out of curiosity I ran iostat one day for a while and saw numbers similar to below:

[Code]...

Some of these kind of weird me out, especially the disk utilization and the corresponding low data transfer. I'm not a disk I/O expert, so I'm hoping there are gurus out there willing to help explain what it is I'm seeing here. As a side note, the system is back up and running; it just runs sluggish, and neither the database folks nor the hardware guys can make heads or tails of it. I've sent them the same graphs from iostat, but so far no response.

View 1 Replies View Related

Server :: Dual Drive Failure In RAID 5

May 22, 2009

I *had* a server with 6 SATA2 drives running CentOS 5.3 (I've upgraded over time from 5.1). I had set up (software) RAID 1 on /boot for sda1 and sdb1, with sdc1, sdd1, sde1, and sdf1 as hot spares. I created LVM (over RAID 5) for /, /var, and /home. I had a drive fail last year (sda). After a fashion, I was able to get it working again with sda removed. Since I had two hot spares on my RAID5/LVM setup, I never replaced sda. Of course, on reboot, what was sdb became sda, sdc became sdb, etc. So, recently, the new sdc died. The hot spare took over, and I was humming along. A week later, before I had a chance to replace the spares, another drive died (sdb). Now I have 3 good drives; my array has degraded, but it's been running (until I just shut it down to try this).

I now only have one replacement drive (it will take a week or two to get the others). I went to linux rescue from the CentOS 5.2 DVD and changed sda1 to a Linux (as opposed to Linux RAID) partition. I need to change my fstab to look for /dev/sda1 as /boot, but I can't even mount sda1 as /boot. What do I need to do next? If I try to reboot without the disk, I get: insmod: error inserting '/lib/raid456.ko': -1 File exists. Also, my md1 and md2 fail because there are not enough discs (it says 2/4 failed). I *believe* this is because sda, sdb, sdc, sdd, and sde WERE the drives in the RAID before, and I removed sdb and sdc, but now I do not have sde (because I only have 4 drives) and sdd is the new drive. Do I need to label these drives and try again? Suggestions? (I suspect I should have done this BEFORE the failure.) Do I need to rebuild the RAIDs somehow? What about LVM?

View 6 Replies View Related

Ubuntu Servers :: RAID 5 Failure While Increasing Size?

Sep 6, 2010

Based on the reading I've done over the past 48 hours, I think I'm in serious trouble here with my RAID 5 array. I got another 1 TB drive and added it to my other 3 to increase my space to 3 TB... no problem.

While the array was resyncing - it got to about 40% - I had a power failure. So I'm pretty sure it failed while it was growing the array, not the partition. The next time I booted, mdadm didn't even detect the array. I fiddled around trying to get mdadm to recognize my array, but no luck.

I finally got desperate enough to just create the array again. I knew the settings of my array and had seen some people have success with this method. When creating it, it asked me if I was sure, because the disks appeared to belong to an array already, but I said yes. The problem is that when I created it, it created a clean array, and this is what I'm left with:

Code:
/dev/md0:
Version : 00.90
Creation Time : Sun Sep 5 20:01:08 2010
Raid Level : raid5
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)

[Code]....

I tried looking for backup superblock locations using e2fsck and every other tool I could find, but nothing worked. I tried testdisk, which says it found my partition on /dev/md0, so I let it create the partition. Now I have a /dev/md0p1, which won't let me mount it either. What's interesting is that gparted reports /dev/md0p1 as the old partition size (1.82 TB) - the data has to still be there, right?

View 3 Replies View Related

Ubuntu :: Install Failure - Nvidia Raid And No Boot?

Sep 13, 2010

On my PC, which was running WinXP, I thought of installing Ubuntu. (I have installed Linux a few times in past years and use it on another couple of PCs.) But something went wrong. This machine has 2 x 200GB Maxtor drives in a RAID 0 configuration, supported by the motherboard's Nvidia chipset, and working well in Windows.

When I ran the live Ubuntu 10.04 CD, gparted was not able to access the drives in the RAID configuration until I installed the mdadm and kpartx packages; then the existing data became visible. So after that initial moment I thought all was OK and proceeded to install Lucid on the machine, dual booting with Windows. I partitioned manually so that on my 400GB RAID drive there is an 80GB NTFS partition with WinXP, a 90GB extended partition for Linux ext4 and swap, and then a last 200GB NTFS partition for data.

All went well, but now on restarting the computer nothing happens: nothing loads, GRUB is not showing, and it looks like I cannot launch Linux or Windows. All the data from WinXP and the Ubuntu installation seems to be on the disks, but the PC is just not booting. I suppose the problem is with the RAID configuration not being handled properly during the installation, but is there anything I can do now, apart from reinstalling Windows XP or installing Ubuntu in a non-RAID configuration?

View 9 Replies View Related

Server :: MDADM Raid 5 Array - OS Drive Failure?

Jun 7, 2011

I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't: when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: "Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error.

mdadm: Not enough devices to start the array." I have tried mdadm --assemble --scan and it gives this output: "mdadm: /dev/md0 assembled from 2 drives - not enough to start the array." I know that all 4 drives are present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:

root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90

[code]...

View 11 Replies View Related

Slackware :: Automating Raid Failure Detection On Slack 13.1?

May 1, 2011

I just set up sendmail on my server to send emails and it works; now I would like to be able to get an email from mdadm if something goes wrong. I imagine most RAID users have this feature set up.

Right now, I have 7 raid arrays and mdadm starts at boot time. Until now, I used Mr. Goblin's script (http://connie.slackware.com/~mrgoblin/files/rc.mdadm) (thanks Mr Goblin!) to monitor my arrays.

The script is started at boot time from rc.local. I created a small script in /usr/bin that sends the following command to rc.mdadm, giving me the status of the arrays:

Code:
/etc/rc.d/rc.mdadm status

and it works fine, but this requires me to probe the arrays manually by calling the script from the command line. I would like to automate probing every 10 minutes or so and get an email if a fault is detected.

[Code]...
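
mdadm ships its own monitor mode for exactly this, so the periodic probing can be handed to mdadm itself. A sketch (the mail address is a placeholder; setting MAILADDR in /etc/mdadm.conf works too):

Code:
# e.g. from rc.local -- recheck every 600 seconds and mail on events:
mdadm --monitor --scan --daemonise --delay=600 --mail=you@example.com

# verify that a Fail event really produces a mail:
mdadm --monitor --scan --oneshot --test --mail=you@example.com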

View 14 Replies View Related






