Ubuntu Servers :: Starting Degraded Raid 5 Array?

Jun 11, 2010

My server has 7 HDs in a RAID 5 array, and all was working well until one of them died. The HD that died sort of works: it can read about half of a file, but it freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says "The drive for /media_kbt is not ready or present. Press S to skip or M for manual recovery." I hit S and then go to Disk Utility, but I can't start the array or add disks to it.

Here is me trying to do random stuff

Code:
administrator@3dslice-host:~$ sudo mdadm --stop /dev/md0
[sudo] password for administrator:
mdadm: metadata format 00.90 unknown, ignored.
mdadm: stopped /dev/md0
administrator@3dslice-host:~$ sudo mdadm --add /dev/md0 /dev/sda1
mdadm: metadata format 00.90 unknown, ignored.

[Code]...
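
For what it's worth, the "metadata format 00.90 unknown, ignored" message usually comes from a "metadata=00.90" entry in /etc/mdadm/mdadm.conf that newer mdadm versions no longer parse; rewriting it as "0.90" silences the warning. A degraded-but-usable RAID 5 can often be started with --run, after which the replacement partition can be added. A minimal sketch, assuming /dev/md0 and /dev/sda1 from the output above and that the dead drive has already been removed:

Code:
sudo sed -i 's/metadata=00.90/metadata=0.90/' /etc/mdadm/mdadm.conf
sudo mdadm --assemble /dev/md0 --run           # start the array with the members that are left
sudo mdadm --manage /dev/md0 --add /dev/sda1   # add the replacement partition
cat /proc/mdstat                               # watch the rebuild progress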

View 2 Replies



Ubuntu Servers :: RAID-6 Cannot Start Degraded Array

Jun 26, 2011

Ubuntu Server 11.04 i386. I've used Linux on and off for years but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS, but decided to give Ubuntu+Webmin a try, and up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array. I shut down, remove a drive, then boot up. The array won't start at all, and comes up with this error during boot:

Quote:

the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery

If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a" I get the response:

Quote:

mount: special device /dev/RAIDVG1/RAIDLV1 does not exist

I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine, and is in a clean state.
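
One detail that is easy to miss: BOOT_DEGRADED is read from the copy of the mdadm configuration baked into the initramfs, so it only takes effect after the initramfs is rebuilt. From the recovery shell, the array and the LVM volume on top of it can usually be brought up by hand. A rough sketch, using the names from the error above:

Code:
sudo mdadm --assemble --scan --run   # assemble the array even though it is degraded
sudo vgchange -ay RAIDVG1            # activate the volume group that holds RAIDLV1
sudo mount -a
sudo update-initramfs -u             # so BOOT_DEGRADED=true is honoured on the next boot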

View 9 Replies View Related

Ubuntu Servers :: RAID-5 Recovery (spare/active) / Degraded And Can't Create RAID, Auto Stop RAID [md1]?

Feb 1, 2011

Could any RAID gurus kindly assist me on the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty, and then re-adding them as spares.

Now the array is (logically) no longer able to start:

mdadm: Not enough devices to start the array. Degraded and can't create RAID, auto stop RAID [md1]

I was able to examine the disks though:

Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....

Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin whatever small chance I have left to rescue my data, I would like to hear the input of this wise community.
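
Before reaching for --create, it is usually worth trying a forced assembly, which only rewrites the event counters and is far less risky; --create with --assume-clean should be the very last resort, and then only with the original device order and metadata version. A cautious sketch (device names taken from the post, so double-check them against --examine first):

Code:
sudo mdadm --examine /dev/sd[abcd]2 | grep -E 'Event|this'   # compare event counts and device roles
sudo mdadm --stop /dev/md1
sudo mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
sudo mount -o ro /dev/md1 /mnt   # if a filesystem sits directly on md1, mount read-only and check the data first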

View 4 Replies View Related

Software :: RAID 5 Array Not Assembling All 3 Devices On Boot Using MDADM - One Is Degraded

Aug 31, 2010

I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a RAID 5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:

[Code]....
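
Two things tend to help in this situation, as a rough sketch: recording the array in mdadm.conf so the initramfs assembles all three members at boot, and adding a write-intent bitmap so that a briefly missing member re-syncs in minutes rather than hours (the /dev/md0 name is an assumption based on the post):

Code:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record the array definition
sudo update-initramfs -u                                         # rebuild the initramfs with the new config
sudo mdadm --grow /dev/md0 --bitmap=internal                     # write-intent bitmap: fast re-adds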

View 11 Replies View Related

Ubuntu Servers :: 10.04 LTS Degraded Software RAID5 Array With LVM Won't Boot?

Jul 5, 2010

I get sent to a BusyBox (initramfs) shell with no text editor and don't know how to copy all the error messages and post them here. If there is a way, let me know. I've typed them out in the meantime:

Code:
md0 : inactive sdxxxx
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found

[Code].....

This is with a 3 disk RAID5 array. I turned off the system, pulled out a drive, and started it back up. Fresh install, all I've done so far is apt-get update and upgrade.
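
From the (initramfs) prompt the array and LVM can usually be brought up manually so the boot can continue, and booting degraded can be allowed as a one-off via a kernel parameter. A sketch of what that might look like (BusyBox has no sudo; you are already root there):

Code:
mdadm --assemble --scan --run   # start the RAID 5 with only two members
lvm vgchange -ay                # activate the logical volumes on top of it
exit                            # leave the initramfs shell and let the boot continue
# alternatively, edit the kernel line in GRUB for one boot and append: bootdegraded=true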

View 4 Replies View Related

Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250+ 750 + 250 +200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
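
The numbers are consistent with Disk Utility building a striped (RAID-0) set rather than a spanned one: a striped array can only use the capacity of its smallest member on every disk, so 2 x 250 GB gives 500 GB, adding a 160 GB drive drops it to 3 x 160 GB = 480 GB, and 11 x 160 GB is roughly the 1.8 TB shown. For a true JBOD/span, a linear md array (or an LVM volume group) simply adds the sizes up. A hedged sketch, with made-up device names:

Code:
# linear (JBOD) array: capacities are summed instead of being truncated to the smallest member
sudo mdadm --create /dev/md0 --level=linear --raid-devices=11 /dev/sd[b-l]
sudo mkfs.ext4 /dev/md0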

View 4 Replies View Related

CentOS 5 :: Software RAID - Starting Array With Mdadm

Jul 15, 2010

I've been having trouble with software RAID. In particular, the RAID array becomes un-assemblable after reboots. The config is CentOS 5 with 4 SATA disks (one 160GB disk containing the OS, no RAID, and three 2TB disks configured as a RAID 5 array - no spare drive). These drives were configured in anaconda and all seemed to go well (the array and its LVM partitions worked and it finished rebuilding overnight). A couple of reboots later the drives cannot be assembled anymore and the machine won't boot. The error message says:

mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.

Of course there are 3 drives and no spares in the array as configured. Manually starting the array with mdadm --assemble --scan gives the same message, as does assembling the array by specifying the individual members. /proc/mdstat does recognize the 3 drives, and when I look at the partition tables in fdisk they show as being software RAID. What could be wrong, and what steps can I take to diagnose it? I tried configuring the RAID drives manually before going the anaconda route. Also, does anyone know how I can edit the /etc/fstab file to disable them so the machine will at least boot? The (Repair filesystem) shell has the / drive mounted read-only.
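
To at least get the machine booting and then dig further, one approach (sketched below, with member names assumed to be sdb1/sdc1/sdd1) is to remount the root read-write, comment the RAID volume out of /etc/fstab, and then compare what the superblocks on each member actually say before trying a forced assembly:

Code:
mount -o remount,rw /                      # the repair shell mounts / read-only
vi /etc/fstab                              # comment out the RAID/LVM mount for now
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 | grep -E 'Event|State|this'
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
# then vgchange -ay if LVM sits on top of md0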

View 7 Replies View Related

Debian Hardware :: Degraded Array On Boot?

May 6, 2011

I recently upgraded from lenny to squeeze, and my RAID array is degraded immediately after boot. Some info on my machine: I've got a built-in SATA II chipset with 4 drives /dev/sda-d that I use for my RAID5 system, and an IDE drive /dev/sde. Before I upgraded to squeeze, the IDE drive was at /dev/hda. I did the usual 2-step upgrade (kernel/udev first, reboot, then everything else). After the first reboot, the IDE drive became /dev/sda and the SATA drives were /dev/sdb-e. I updated mdadm.conf to reflect the new drive naming and added /dev/sde to the array; it rebuilt successfully and everything was back online. After the 2nd reboot, the IDE drive became /dev/sde and my SATA drives went back to /dev/sda-d. No biggie; I updated mdadm.conf again, rebuilt, and everything works.

Now that everything has been upgraded, the RAID array still becomes degraded upon boot. I can always add /dev/sda back to the array, and it's always rebuilt successfully. Here are some interesting lines from dmesg. It finds all my drives:

[ 2.376202] scsi 0:0:0:0: Direct-Access ATA ST31000340AS AD14 PQ: 0 ANSI: 5
[ 2.376636] scsi 1:0:0:0: Direct-Access ATA ST31000340AS AD14 PQ: 0 ANSI: 5
[ 2.376968] scsi 2:0:0:0: Direct-Access ATA ST31000340AS AD14 PQ: 0 ANSI: 5

[code]....
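
Since the kernel keeps shuffling which drive is sda and which is sde, it may help to identify the array and its members by UUID rather than by device name in mdadm.conf, and then regenerate the initramfs so the boot-time copy matches. A sketch:

Code:
sudo mdadm --detail /dev/md0 | grep UUID        # UUID of the array itself
# in /etc/mdadm/mdadm.conf use a line of the form:
#   ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
sudo update-initramfs -u -k all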

View 3 Replies View Related

Ubuntu Servers :: Partitioning >2TB RAID Array?

Mar 26, 2011

I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).

My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:

Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)
[code].....

What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, and 100% for my start/end with no success.
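
With a GPT label and a start point expressed in MiB (so it falls on a 1 MiB boundary), parted is usually happy, and align-check can confirm it. A sketch, assuming the Areca volume is /dev/sdb:

Code:
sudo parted -a optimal /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary ext4 1MiB 100%
(parted) align-check optimal 1
(parted) print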

View 8 Replies View Related

Ubuntu Servers :: Lost Partiontable On RAID 1 Array?

Jun 4, 2010

I just restarted my server (Ubuntu 9.04 server, running on ESXi 4.0), and while copying files onto the server using samba I got strange problems and the connection was lost. When I rebooted the whole system (ESXi as well as Ubuntu Server), I found problems on my RAID disk.

In the directory where the new files were added there are a lot of files, but many of them do not show any info except their name:

1304 -rw-rw-rw- 1 spoorhobby spoorhobby 1327274 2010-05-15 22:10 DSCF1895.JPG
? -????????? ? ? ? ? ? DSCF1896.JPG
? -????????? ? ? ? ? ? DSCF1897.JPG
? -????????? ? ? ? ? ? DSCF1898.JPG

[Code].....

Both mirror disks are still functioning and I can still add/delete files from the server, from other Linux systems, and from other Windows systems via samba.

I did make a full backup on a different server.
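
Those "?" entries are what ls prints when it cannot stat a file, which points at filesystem (inode/directory) corruption on the md device rather than a lost partition table. With a backup already taken, a forced filesystem check of the array is the usual next step. A sketch, assuming the mirror is /dev/md0 and an ext3/ext4 filesystem (the mount path is a placeholder):

Code:
sudo umount /path/to/share      # unmount the RAID filesystem first
sudo fsck -f /dev/md0           # forced check; answer the repair prompts, or add -y
sudo mount /dev/md0 /path/to/share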

View 9 Replies View Related

Ubuntu Servers :: WRITE Performance Down On RAID 1 Array

Sep 7, 2010

I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!) Using dd to measure, write performance is only 8.7 MB/s. Read is great though, at 74.5 MB/s. The tests were run straight after rebooting and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.

[code]...

As you can see from the bo column, there is definitely something stalling. According to top, %wa (waiting for I/O) is always around 75%; however, as shown above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new and smartctl (smartmontools) does not detect any faults.
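
Two quick checks that might narrow this down: a dd write that forces data to disk (so the page cache cannot flatter the number) and per-device statistics to see whether one member of the mirror has a much higher service time than the other. A sketch, with drive names assumed to be sda and sdb:

Code:
dd if=/dev/zero of=./ddtest bs=1M count=1024 conv=fdatasync   # run from a directory on the RAID 1 array
sudo apt-get install sysstat
iostat -x 2                                                   # compare await/%util of the two mirror members
sudo smartctl -a /dev/sda | grep -i -E 'pending|reallocat'    # look for sectors being remapped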

View 4 Replies View Related

Ubuntu Servers :: RAID Array Not Mounting Correctly

Jun 6, 2011

I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:

The disk drive for /var/media is not ready yet. Press S to skip ... Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the array, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the "mdadm assemble" is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.

As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know:

1) why this broke in an upgrade and is it a known problem?
2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
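
As a stop-gap that keeps the auto-mount but stops the boot from hanging when the array is late, mountall on 10.04 understands a nobootwait option in fstab; the longer-term fix is making sure mdadm assembles md0 from the initramfs before the mounts are attempted. A sketch of the fstab line (the ext4 type is an assumption):

Code:
/dev/md0   /var/media   ext4   defaults,nobootwait   0   2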

View 9 Replies View Related

Ubuntu Servers :: Monitor A Buffalo Raid Array?

Jun 6, 2011

I have a 10.04 server with a LinkStation RAID 5 attached via USB. What is the best way to monitor the drives for a failure? It's at a remote site.

View 2 Replies View Related

Ubuntu Servers :: RAID 5 Software Array With 3TB Drives

Jun 15, 2011

I am trying to use three 3TB Western Digital drives in a RAID 5 software array. The trouble seems to be that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.

Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name: BackupFull6
RAID type: RAID5
RAID size: 5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y

$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name: isw_cdjhcaegij_BackupFull6
size : 3131048448
stride : 128
type : raid5_la
status : ok
subsets: 0
devs : 3
spares : 0

So I cannot understand why the size of the created array is only 3131048448 blocks, or about 1.5 TB. The first command seemed to imply it was going to create an array of 5589GB.

System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
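
This looks like a limitation of the dmraid/ISW (Intel fakeRAID) path rather than of the drives: the on-disk ISW metadata that dmraid writes apparently cannot describe members anywhere near 3 TB, so the set silently comes out far smaller. If the array does not need to be shared with a Windows install on the same fakeRAID, plain mdadm has no such limit. A hedged sketch using the same three disks:

Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
cat /proc/mdstat                     # watch the initial sync
sudo mkfs.ext4 /dev/md0              # then put a filesystem (or LVM) on top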

View 8 Replies View Related

Ubuntu Servers :: Raid Array Incorrectly Assembled On Boot

Feb 20, 2011

I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up in a mirrored software raid array using mdadm. They've synced, I've put file systems on them (1 x ext4, 2 x luks + ext4) and I can mount them. I've checked the partitions using fdisk. I've checked the filesystems using fsck. So far so good. Next step is that I'd like mdadm to automatically assemble them on boot. (Not bothered about mounting and crypttabing yet.)

I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions. I've checked that this matches the output of sudo mdadm --detail --scan

The new lines in this file are:

ARRAY /dev/md9 level=raid1 num-devices=2 UUID=470fb8a6:45561fe0:ebda4a02:9ba7a1ed
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=f351fbba:c704a4b2:ebda4a02:9ba7a1ed
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=c6ccec17:2274588e:ebda4a02:9ba7a1ed

To check that the mdadm.conf is fine I have stopped the new arrays:

[Code].....
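
Presumably the next step after stopping them is to reassemble purely from the config file, which is a good test that the ARRAY lines are right; and since early boot uses the copy of mdadm.conf baked into the initramfs, refreshing that copy is what actually makes the auto-assembly happen at boot. A sketch:

Code:
sudo mdadm --assemble --scan    # should bring md8/md9/md10 back using only mdadm.conf
cat /proc/mdstat
sudo update-initramfs -u        # propagate the new mdadm.conf into the initramfs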

View 7 Replies View Related

Ubuntu Servers :: Make A Service To Depend On A RAID Array?

Mar 7, 2011

Short story: I have a problem with one of my services (mediatomb) - it requires an md RAID array to be mounted in order to start, because it uses files from it. $remote_fs is added by default to the "Required-Start" line of the init script, so I thought that this should be enough. However, the mediatomb service fails to start on boot, but starts just fine when I execute "service mediatomb start" later. The array is entered in /etc/fstab and is automatically mounted on boot.

Long story...

This is my file server (Ubuntu Server 10.10), which has a RAID array created with mdadm (mounted on /z), and the root filesystem is located on a USB thumb drive. I've installed mediatomb, but I wanted to put its database files on the RAID array instead of the root fs, so I've symlinked /var/lib/mediatomb (the default path) to /z/mediatomb on the array. This is because the mediatomb DB is supposed to be updated fairly often, so I didn't want it to stay on the flash drive.

Problem is, the mediatomb service can't start on boot - in /var/log/mediatomb.log, it says "2011-03-07 19:22:47 ERROR: /var/lib/mediatomb : 20 x No such file or directory". As I said, it works fine when manually started later...

This is the fstab entry for the raid array code...
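
In practice, $remote_fs being satisfied does not always mean a local md-backed mount such as /z is there yet (which is what the log suggests), so one low-tech workaround is to make the init script wait for the mount point before starting the daemon. A sketch of a few lines that could go near the top of the start) case in /etc/init.d/mediatomb (the 30-second limit is arbitrary):

Code:
# wait up to 30 seconds for the RAID mount to appear before starting mediatomb
for i in $(seq 1 30); do
    mountpoint -q /z && break
    sleep 1
done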

View 1 Replies View Related

Hardware :: Raid1 Mdadm Repair Degraded Array With Used Good Hard Drive?

Jun 27, 2009

I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID 1 array. mdadm --detail /dev/md0 shows:

0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1

I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue mdadm --manage /dev/md0 --fail /dev/sda1, but mdadm's response is: "mdadm: hot remove failed for /dev/sda1: no such device or address". I thought I must mark the failed drive as "failed" to prevent RAID 1 from trying to mirror in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the RAID array from automatically trying to mirror in the wrong direction.
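
Since the failed drive already shows up as "removed", there is nothing left to mark as failed, which is why the --fail command has no device to act on. Adding a fresh member to a degraded RAID 1 always rebuilds from the surviving active disk (here /dev/sdb1) onto the newcomer, so the mirror cannot sync in the wrong direction. A cautious sketch, assuming the used disk will appear as /dev/sda:

Code:
sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sda   # copy the partition layout from the good disk
sudo mdadm --zero-superblock /dev/sda1           # wipe any old md metadata the used partition may carry
sudo mdadm --manage /dev/md0 --add /dev/sda1     # mdadm then rebuilds sdb1 -> sda1
cat /proc/mdstat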

View 7 Replies View Related

Ubuntu Installation :: 10.04->11.04:often Get Degraded Raid Error At Boot?

Sep 1, 2011

I recently upgraded from 10.04 to 11.04 and I now often get boot messages about a degraded raid.

I'm fairly experienced, but I'm confused about which raid it is talking about. I have a raid5 array, but I don't boot off that, and it seems fine when I finally get it to boot. Previously, I didn't have any other raid arrays[1], but now I seem to have two others called md126 and md127, and they both seem to be degraded. Where did they come from?

[1] I *do* have two 80GB drives that I was booting from in RAID1, but that was a looong time ago, and I have since only booted from one of them. The partition table indeed shows partitions 1 and 5 are raid autodetect and /proc/mdstat shows they are degraded ([U_]). Could it be that this is causing the problem? If so, why has this only started to happen since the upgrade from 10.04 to 11.04? Anyway, perhaps it is a good idea to add in that second disk to the raid1 array. If so, how do I do that? Note that I've also noticed that when I boot and get to the screen where I select from the different kernel versions, I now get a couple of really old ones too - my thought is that these are from the raid1 disk that I stopped using. If I add it to the array, how can I be sure it will mirror in the correct direction?

It could be that I have fairly recently plugged in that second RAID1 disk, after a long time of not having enough spare sata sockets (I switched my RAID5 array from 8 disks to only 3 disks, so suddenly had a lot more spare sockets).
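
md126/md127 are the names mdadm falls back to for arrays it finds on disk but cannot match to an entry in mdadm.conf, which fits the theory that they are the old, half-present RAID1 partitions on the two 80 GB drives. It should be possible to confirm that before deciding whether to re-add the second disk or retire the old array entirely. A sketch of the read-only checks (sdX is a placeholder for the 80 GB drives):

Code:
cat /proc/mdstat
sudo mdadm --detail /dev/md126 /dev/md127     # which partitions they contain, and their state
sudo mdadm --examine /dev/sdX1 /dev/sdX5
# if the second disk should rejoin the RAID1: mdadm --manage /dev/mdNNN --add <partition>
# (a newly added member is always rebuilt FROM the currently active member, never the reverse)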

View 9 Replies View Related

Ubuntu Servers :: Mdadm RAID 6 Array With Si 3132 SATA Controller ?

Mar 12, 2010

I've recently started having an issue with an mdadm RAID 6 array that has been operational for about 2500 hours.

Intermittently during write operations the array stalls, dropping to almost zero write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light locked on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference and it seems completely random. Sometimes copying a 5 GB dataset results in no slowdown; other times a torrent downloading to the array at 50 kB/sec does cause a slowdown, and vice versa.

The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.

Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100 MB/sec for each drive, ~425 MB/sec for the array).

The only thing I did notice is that udma6 is enabled for all the ICH9R drives while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set a si3132 drive to udma6 results in an I/O error from hdparm.

The si3132 drives are using the sata_sil24 driver. Nothing of interest appears in kern.log or syslog. During this time top shows very high wait time.

The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller that now appear to offer separate firmwares for RAID operation (some sort of hybrid controller/software thing the controller supports) and for standard IDE use.

Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so, which firmware is best supported by the Linux driver?

I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy about the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.

View 2 Replies View Related

Ubuntu Servers :: Simulate A Failed Raid Array On A Pair Of 2tb Disks?

Feb 26, 2011

Using a fresh copy of Server 10.04, I'm trying to simulate a failed RAID array on a pair of 2TB disks. Here is the procedure I have been following so far:

- Remove the dead disk partitions from each of the raid 1 arrays (substitute the correct md devices and partitions)
- mdadm /dev/md0 -r /dev/sdb2
- mdadm /dev/md1 -r /dev/sdb3

[code]....

I get an error here that sfdisk does not support GPT (GUID partition table). I thought sfdisk did support GPT? It says to use parted, but I can't find a command in the parted documentation that copies a partition table over from another disk. Any suggestions? I suppose I could make the partitions manually, but I'm writing a procedure for people who aren't that technical and I need it to be simple enough to be run in my absence; manually building the partitions would be too hard for them.
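
sfdisk only handles MBR tables; for GPT disks the equivalent one-liner lives in sgdisk (from the gdisk package), which can replicate a partition table and then randomise the GUIDs on the copy. A sketch, assuming sda is the surviving disk and sdb the blank replacement:

Code:
sudo apt-get install gdisk
sudo sgdisk -R /dev/sdb /dev/sda   # replicate sda's GPT layout onto sdb
sudo sgdisk -G /dev/sdb            # give the copy its own random disk/partition GUIDs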

View 2 Replies View Related

Ubuntu Servers :: Mounting Existing RAID Array On Fresh Installation?

Aug 1, 2011

I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with 4 x 2TB disks). All is working well. My mdadm.conf file looks like this:

Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.

[code]....

If I were to lose the boot disk and needed to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
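
That assumption is essentially right: the md superblocks on the four data disks describe the array fully, so nothing extra is strictly required. Keeping a copy of /etc/mdadm/mdadm.conf and the relevant /etc/fstab line just saves retyping them later. A quick check worth running now, while everything is healthy:

Code:
sudo mdadm --detail --scan          # the one ARRAY line a fresh install would need
sudo mdadm --detail /dev/md0        # note the array UUID somewhere safe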

View 6 Replies View Related

General :: RAID 1 With 2 Disks Starts Degraded After Reboot From 3rd

Jun 20, 2010

Basically, I installed Debian Lenny creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/".
I did not pay much attention, but it seemed to work fine at the start - both RAID devices were up early during boot, I think. After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling & installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.

Let me describe my HDD setup: when I do "sudo fdisk -l" it gives me sda1, sda2 RAID partitions on sda and sdb1, sdb2 RAID partitions on sdb, which are my two 1 TB drives, and sdc1, sdc2, sdc5 on my 3rd 160GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS). The problem is that the RAID starts degraded every time (it starts with 1 out of 2 devices). When doing "cat /proc/mdstat" I get "U_" statuses and the 2nd device is "removed" on both md devices.

I can successfully run partx -a sdb, which gives me sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sda1". After I re-add the devices it syncs the disks, and after about 3 hours I see a fine status in mdstat. However, when I reboot, it again starts with a degraded array. I get a feeling that after I re-add the disk and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf even after I re-add the disks and sync.
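
The "configuration somewhere" is most likely the initramfs: the copy of mdadm.conf used at boot is the one baked into it, so refreshing it once the arrays are healthy is worth a try. It is also worth checking whether the bare disk (rather than its partitions) carries a stray superblock that could confuse early assembly, which would fit the partitions only appearing after partx. A sketch:

Code:
sudo update-initramfs -u -k all     # rebuild the initramfs with the current mdadm.conf
sudo mdadm --examine /dev/sdb       # a superblock on the whole disk (not sdb1/sdb2) would be suspicious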

View 1 Replies View Related

Ubuntu Servers :: Mdadm - Why /dev/sdb1 And /dev/sdi1 Show As Both Ext2fs And Also As Part Of A RAID Array

May 31, 2011

I've been having some problems with my RAID 5 array, and after extensive investigation, I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, b/c it's a previously created array, but it didn't seem to like that. So, I checked into --create, and it will re-create the array w/out destroying the data, if the superblocks are persistent, which they seem to be. However, here's what I get:

[Code]....

My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?
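
One plausible explanation: with 0.90/1.0 metadata the md superblock sits at the end of the member, so whatever lives at the start of the partition - an old standalone filesystem, or the first stripe of the array's own filesystem - still looks like ext2 to probing tools, and mdadm --create warns about it. Inspecting the signatures without touching anything might clarify it, roughly like this:

Code:
sudo mdadm --examine /dev/sdb1 /dev/sdi1   # what md thinks is on these partitions
sudo wipefs /dev/sdb1                      # with no options wipefs only LISTS signatures, it wipes nothing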

View 3 Replies View Related

Ubuntu Servers :: Mounting Large (12TB) SCSI Attached RAID Array (formatted With Ntfs) ?

Feb 16, 2010

I have a large RAID array of 12 TB attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu. I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":

sudo fdisk -l
Disk /dev/sda: 164.7 GB, 164696555520 bytes
255 heads, 63 sectors/track, 20023 cylinders
[code]........
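
fdisk cannot see beyond 2 TB because a 12 TB volume has to carry a GPT label, so its output is misleading here; parted shows the real partition, and NTFS volumes mount read/write through ntfs-3g. A sketch with an assumed device name of /dev/sdb:

Code:
sudo parted /dev/sdb print              # should show a gpt label and the large ntfs partition
sudo apt-get install ntfs-3g
sudo mkdir -p /mnt/bigraid
sudo mount -t ntfs-3g /dev/sdb1 /mnt/bigraid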

View 2 Replies View Related

General :: Raid Apparently Degraded / Disks Mounted Read Only / How To Proceed

Apr 9, 2011

I have a 3ware controller that has a RAID 1 of two SATA disks. After an outage, the Linux box (which is running Ubuntu) restarted and the partition is now mounted read-only. I only have the "/" mount point (this is a test server). Now, if I go to the 3ware controller by pressing ALT-3 while booting, I don't see any indication that there is something wrong with the disks. If I let the computer boot, I'm asked by fsck if I want to fix/ignore/etc. the inconsistencies found.

View 1 Replies View Related

Fedora X86/64bit :: Use Fedora / Linux Raid Program To Manage Raid Array?

Jun 24, 2009

I've tried to install Fedora 11, both 32 and 64 bit, on my main machine. It could not install as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.

I have an Intel board (see signature). I am currently running the Intel RAID software under W7. It works fine. But I'm wondering, when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?

View 2 Replies View Related

Ubuntu :: Raid 1 Array Not Showing Up?

Aug 28, 2010

I am using Ubuntu 10.04 x64. I am not trying to install Ubuntu on a RAID 1 drive like all of the guides are for. I have a RAID 1 array that I am using for data storage. In Windows it shows as a single array just fine. In Linux it shows as 2 separate drives. I don't care how they show up, to be honest; I just need data written to one drive to be written to the other automatically as well, so my RAID isn't screwed up. Looking through different articles and forums I find a lot of stuff saying that it should show up under /dev/mapper/dxxx or something under /dev/mapper. All that shows up there for me is a device called control, which doesn't seem to do anything.
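
If this is a motherboard fakeRAID mirror (rather than one built inside Windows with dynamic disks), Linux needs dmraid to read the vendor metadata and present the pair as one device under /dev/mapper. A sketch:

Code:
sudo apt-get install dmraid
sudo dmraid -ay              # activate every RAID set dmraid can find
ls -l /dev/mapper/           # the mirror should now appear here as a single device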

View 1 Replies View Related

Ubuntu :: GPT Nvidia RAID Array In 10.10?

Nov 4, 2010

I just installed Ubuntu 10.10 64-bit and wanted to get access to my nvidia RAID array. This array is working and is NTFS formatted, but it wasn't showing up through normal means in Ubuntu (for example, the NTFS Configuration Tool didn't display it). Here's what the system showed:

Code:

root@hermes:~# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 59 2010-11-03 22:39 control
lrwxrwxrwx 1 root root 7 2010-11-03 22:42 nvidia_dadijiag -> ../dm-0

[code]....

Is my mirror still in effect, or did I just mount one of the specific drives from the mirror?
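
dmraid itself can answer that: if the set shows up with type mirror and a healthy status, then dm-0 (aka /dev/mapper/nvidia_dadijiag) is the assembled mirror rather than an individual drive, so anything mounted through it is written to both disks. A quick check:

Code:
sudo dmraid -s               # type should read "mirror" and status "ok"
ls -l /dev/mapper/           # partitions on the set usually appear as nvidia_dadijiag1, etc.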

View 1 Replies View Related

Ubuntu :: Can't Remove RAID 1 Array?

Jan 24, 2011

I'm going a little bit crazy. I can't seem to remove my RAID 1 arrays. Any suggestions? I don't need to save data. The drives are empty. I'm upgrading to 4 x 2TB drives. I'm running Lucid Lynx server.

Code:
jessica@nas:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb1[1]
976759936 blocks [2/1] [_U]
md0 : active raid1 sdd1[0]

[Code]...
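
Removing an md array completely means stopping it, wiping the md superblock from every former member, and deleting any leftover ARRAY lines in /etc/mdadm/mdadm.conf (plus fstab entries). A destructive sketch, with member names taken from the mdstat output above (there may be more members in the truncated part):

Code:
sudo mdadm --stop /dev/md2
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb1   # repeat for every partition that was ever a member
sudo mdadm --zero-superblock /dev/sdd1
# then remove the ARRAY lines from /etc/mdadm/mdadm.conf and run: sudo update-initramfs -u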

View 5 Replies View Related

Ubuntu :: "reconnect" To The RAID Array Even If The "mothersystem" Of The Software RAID Is Lost?

Oct 19, 2010

Consider the following setup: an Ubuntu system installed on a separate SSD for speed, and an Ubuntu software RAID array consisting of X physical HDDs for storage (RAID6 or RAID10). The RAID setup is done during system install. If I suffer a total crash of the SSD and lose my system, will I be able to, using a new system disk, "reconnect" to the RAID array even if the "mother system" of the software RAID is lost? If yes, are there any particular config or system files I need to back up to be able to rescue the array, or will it just be recognized "out-of-the-box" when reinstalling Ubuntu?
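
In short, yes: the md superblocks live on the storage disks themselves, so a software RAID 6/10 created at install time can be reassembled by any fresh system that has mdadm, with or without the old system disk. Backing up /etc/mdadm/mdadm.conf and /etc/fstab only saves a little typing. A sketch of the recovery on a brand-new install:

Code:
sudo apt-get install mdadm
sudo mdadm --assemble --scan                                    # finds and starts the array from its superblocks
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf  # persist the definition
# then add the mount back to /etc/fstab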

View 4 Replies View Related






