Ubuntu Servers :: Simulate A Failed RAID Array On A Pair Of 2TB Disks?

Feb 26, 2011

Using a fresh copy of Server 10.04, I'm trying to simulate a failed RAID array on a pair of 2TB disks. Here is the procedure I have been following so far:

- Remove the dead disk partitions from each of the RAID 1 arrays (substitute the correct md devices and partitions)
- mdadm /dev/md0 -r /dev/sdb2
- mdadm /dev/md1 -r /dev/sdb3

[code]....

I get an error here that sfdisk does not support GPT (GUID Partition Table). I thought sfdisk did support GPT? It says to use parted, but I can't find a command in the parted documentation that copies a partition table over from another disk. Any suggestions? I suppose I could make the partitions manually, but I'm writing a procedure for people who aren't that technical, and I need it to be simple enough to be run in my absence. Manually building the partitions would be too hard for them.
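One way around this, assuming the gdisk package (which provides sgdisk) is available for 10.04: sgdisk can replicate a GPT from one disk to another in a single scriptable command. A sketch (disk names follow the thread's replace-/dev/sdb scenario; note the target comes immediately after -R, so double-check the direction):

Code:

# Copy the GPT from the surviving disk (/dev/sda) onto the replacement (/dev/sdb)
sgdisk -R /dev/sdb /dev/sda
# Give the new disk its own unique GUIDs so the two tables don't share identifiers
sgdisk -G /dev/sdb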

View 2 Replies



General :: 2 Disks Failed Simultaneously On A RAID 5 Array?

Apr 15, 2011

I have a home server running Openfiler 2.3 x64 with a 4x1.5TB software RAID 5 array (more details on the hardware and OS later). All was working well for two years until several weeks ago, when the array failed with two faulty disks at the same time. Well, those things can happen, especially if one is using desktop-grade disks instead of enterprise-grade ones (way too expensive for a home server). Since it was most likely a false positive, I reassembled the array:

Code:

# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: forcing event count in /dev/sdb1(0) from 110 upto 122
mdadm: forcing event count in /dev/sdc1(1) from 110 upto 122

[code]....

Right. Once is just a coincidence, but twice in such a short period of time means that something is wrong. I reassembled the array and again, all the files were intact. But now was the time to think seriously about backing up my array, so I ordered a 2TB external disk and in the meantime kept the server off. When I got the external drive, I hooked it up to my Windows desktop, turned on the server and started copying the files. After about 10 minutes two drives failed again. I reassembled, rebooted and started copying again, but after a few MBs the copy process reported a problem - the files were unavailable. A few retries and the process resumed, but a few MBs later it had to stop again, for the same reason. Several more stops like those and two disks failed again. Looking at the /var/log/messages file, I found a lot of errors like these:

Quote:

Apr 12 22:44:02 NAS kernel: [77047.467686] ata1.00: configured for UDMA/33
Apr 12 22:44:02 NAS kernel: [77047.523714] ata1.01: configured for UDMA/133
Apr 12 22:44:02 NAS kernel: [77047.523727] ata1: EH complete

[code]....

The motherboard is a Gigabyte GA-G31M-ES2L based on Intel's G31 chipset; the 4 disks are Seagate 7200.11 (with a firmware version that doesn't cause frequent data corruption).
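A drive being renegotiated down to UDMA/33, as in the log above, usually means the kernel is backing off after interface errors, which points at cabling or the controller rather than the platters. One hedged check (attribute names vary by vendor; the device name is an example):

Code:

# Attribute 199 counts interface CRC errors - cable/controller trouble,
# not media trouble. A value rising on several drives at once suggests
# a shared cause such as cables, power or the controller.
smartctl -A /dev/sdb | grep -i -e crc -e reallocated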

View 4 Replies View Related

Ubuntu Servers :: Does Replacing Failed Disk On RAID5 Change Any Data On Other Disks In RAID?

Jan 9, 2011

I've got a RAID5 array of 4 disks with Ubuntu 8.04 running on it that is currently still working:

/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd

Smartmontools reports that /dev/sdc has 9 sectors pending reallocation:

Code:

197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 9
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 9
And /dev/sdd has an increasing number of reallocated sectors (about 1 every couple of minutes):

Code:

5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 1735
/dev/sdc has failed a couple of times this week (but I have always successfully re-added it to the RAID5). But the increasing number of reallocated sectors on /dev/sdd concerns me even more.

I'm afraid that during the removal of /dev/sdd and the adding of a new /dev/sdd disk, the RAID might fall apart. That's why I would try to do it from an Ubuntu Live CD: if the RAID falls apart (/dev/sdc fails) during the re-adding of the new /dev/sdd disk, I might still remove the new /dev/sdd, return the previous one, and assemble the RAID with:

/dev/sda
/dev/sdb
/dev/sdd (old one that was previously removed)

Does assembling the RAID in Ubuntu Live and adding a new disk for /dev/sdd write anything to /dev/sda, /dev/sdb and /dev/sdc in the process of adding /dev/sdd into the RAID5?
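For what it's worth: adding a disk does update the metadata on every member (the event counts in the superblocks change), and the rebuild reads the other three disks, but it writes data blocks only to the new member. If you want a trial run that cannot modify anything, mdadm can assemble the array read-only; a sketch (member names assumed from the list above):

Code:

# Assemble without allowing writes to any member; --run may be needed
# to start it with only three of four disks present
mdadm --assemble --readonly /dev/md0 /dev/sda /dev/sdb /dev/sdd
# Mount the filesystem read-only as well
mount -o ro /dev/md0 /mnt/check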

View 2 Replies View Related

Ubuntu Installation :: Merge 1TB Disks Into A RAID 5 Array?

Apr 11, 2010

I wanted to merge my 1TB disks into a RAID 5 array. Four of them in RAID 5 is above the 2-terabyte limit of msdos partition tables, which GRUB2 can boot from, so I decided to set the system up from scratch on GPT partitions. But it seems GRUB2 won't boot from a GPT partition, because it drops to grub rescue and I can't really do anything from there.

here's my set up:

/dev/md0 (RAID 1) - 100MB total:
- /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1
/dev/md1 (RAID 5) - 45GB total:
- /dev/sda2, /dev/sdb2, /dev/sdc2, /dev/sdd2
/dev/md2 (RAID 5) - a bit lower than 3TB:
- /dev/sda3, /dev/sdb3, /dev/sdc3, /dev/sdd3

Any tips on how to get this system up and running? I've spent about 3 days jumping over various problems.
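The usual cause of that grub rescue prompt is booting a GPT disk on a BIOS machine without a BIOS boot partition: GRUB2 then has nowhere to embed its core image. A hedged sketch of the fix (partition numbers depend on your layout):

Code:

# Create a small unformatted partition and flag it for GRUB's core image;
# repeat on each disk you want to be bootable (sdb, sdc, sdd)
parted /dev/sda mkpart biosgrub 1MiB 2MiB
parted /dev/sda set 4 bios_grub on   # "4" assumes this became partition 4
grub-install /dev/sda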

View 8 Replies View Related

Software :: RAID Mdadm Can't Add Disks To Array?

Sep 10, 2010

I have a 7-drive RAID array on my computer. Recently, my SATA PCI card died, and after going through multiple cards to find another one that worked with Linux, I now can't assemble the array. The drives are no longer in the order they were in previously, and mdadm can't seem to reassemble the array. It says there are 2 drives and one spare, even though there were 7 drives and no spares. I know for a fact that none of the drives are corrupted, because one of the non-working RAID cards was still able to mount the array for a short period, but would lose the drives during resyncing (I later found out that the chipset on the card had extremely limited Linux support). I have tried running "mdadm --assemble --scan", and after the array is partially assembled, I add the other drives with "mdadm --add /dev/md0 /dev/sdc1". These both return errors and will not complete on the new RAID card.

Code:
aaron-desktop:~ aaron$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.

[code]....
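mdadm identifies members by the UUIDs in their superblocks, so the new cable order itself shouldn't matter. What sometimes helps is examining every member and then naming them all explicitly in one assemble, rather than assembling partially and adding; a sketch (partition names are examples for seven drives):

Code:

# Check that each member still carries a superblock from the same array
mdadm --examine /dev/sd[b-h]1
# Stop the partial assembly, then assemble all seven members at once;
# --force rewrites stale event counts, so use it only on known members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[b-h]1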

View 4 Replies View Related

Debian Configuration :: Reorganizing Disks In MD RAID Array

Mar 4, 2010

I'm trying to do some RAID managing with mdadm. I would like to sync my spare disk and then remove it from the array to make a backup out of it with the dd command (the best way I can think of to get a current image of the whole system, as it can't be done using the active RAID as a source, because it is constantly in use and changing). So, I have a RAID1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, although it is not faulty.

However, mdadm man page states:
"Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."

So, I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. There seem to be several people reporting that they can't remove this faulty flag accidentally given to a drive, and mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the cover and physically remove the disk. I'm not taking that risk, though. The system is almost always in use, so there is not much chance for me to power off for a temporary disk removal.
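For the record, the usual sequence is harmless in itself: mark the member faulty, remove it, image it, and re-add it (expect a resync when it comes back, a full one unless the array has a write-intent bitmap). A sketch using mdadm's manage mode (device names and backup path are assumptions):

Code:

mdadm /dev/md0 --fail /dev/sdc1      # mark the member faulty (data untouched)
mdadm /dev/md0 --remove /dev/sdc1    # detach it from the array
dd if=/dev/sdc1 of=/backup/md0-member.img bs=1M   # image it (path is an example)
mdadm /dev/md0 --add /dev/sdc1       # re-add; a resync will follow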

RAID CONFIGURATION:
~# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 4 17:38:26 2006
Raid Level : raid1
Array Size : 238950720 (227.88 GiB 244.69 GB)

View 3 Replies View Related

Hardware :: Looking For Good Hard Disks To Use In Raid 1 Array

Jan 13, 2010

I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard disks, for running backups and web hosting. There are reviews of certain disks stating that the low-power disks will get kicked out of the RAID due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.

Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.

View 2 Replies View Related

Server :: RAID 6 Array Coming Up With All Disks As Spare

Mar 25, 2011

I have been running a server with an increasingly large md array and have always been plagued with intermittent disk faults. For a long time, I've attributed those to either temperature or power glitches. I had just embarked on a quest to lower case and drive temperatures. They were running between 43 and 47C, sometimes peaking at 52C, so I've added more case fan power and made sure the drive cage was in the airflow (it has its own fan, too). Also, I've upgraded my power supply and made very sure that all the connectors are good. The array currently is a RAID6 with 5 Seagate 1.5TB drives.

When everything seemed to be working fine, I looked at my SMART logs and found that two of my drives (both well over 14000 operating hours) were showing uncorrectable bad blocks. Since it's RAID6, I figured I couldn't do much harm, so I ran a badblocks test, zeroed the blocks that were reported bad (figuring the drive's defect management would remap them to a good part of the disk) and zeroed the superblock. I then added the drive back to the pack and the resync started. At around 50%, a second drive decided to go, and shortly thereafter a third. Now, with two out of five drives gone, RAID6 will fail. Fine. At least no data will be written to it anymore; however, now I cannot reassemble the array anymore.

Whenever I try I get this:
Code:
mdadm --assemble --scan
mdadm: /dev/md1 assembled from 2 drives and 2 spares - not enough to start the array

Code:
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [linear]
md1 : inactive sdf1[4](S) sde1[6](S) sdg1[1](S) sdh1[5](S) sdd1[2](S)
7325679320 blocks super 1.0
md0 : active raid1 sdb2[0] sdc2[1]
312464128 blocks [2/2] [UU]
bitmap: 3/149 pages [12KB], 1024KB chunk

Which is not fine. I'm sure that three devices are fine (normally, a failed device would just rejoin the array, skipping most of the resync by way of the bitmap), so I should be able to reassemble the array with the two good ones and the one that failed last, then add the one that failed during the resync and finally re-add the original offender. However, I have no idea how to get them out of the "(S)" state.

Code:
mdadm --examine /dev/sdd1
/dev/sdd1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d79d81cc:fff69625:5fb4ab4c:46d45217 .....
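The standard way out of an all-spares "(S)" state is to stop the half-assembled array and force-assemble only the members you trust - here the three good drives plus the one that failed last, which is enough to start a five-disk RAID6 degraded. A sketch (pick the actual four devices from the --examine output; the names below are examples):

Code:

mdadm --stop /dev/md1
# --force rewrites the event counts so the four members agree;
# leave out the two long-failed drives and re-add them afterwards
mdadm --assemble --force --run /dev/md1 /dev/sdd1 /dev/sdf1 /dev/sdg1 /dev/sdh1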

View 2 Replies View Related

Debian Hardware :: RAID As Multiple Disks - Configuring Array?

Dec 2, 2010

Alright, I have this issue on both SystemRescueCD and Debian Squeeze. I have an ASUS P5Q Turbo board that supports hardware RAID. If I configure an array and then start the Linux installer or boot the rescue CD, I get /dev/sda and /dev/sdb instead of an array. What gives? I need to start installing within the hour, so I am desperate for an answer!

View 1 Replies View Related

Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250+ 750 + 250 +200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
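The numbers fit striped RAID-0 behaviour: a stripe set can only use the smallest member's size on every disk, so two 250GB drives give 500GB, adding a 160GB drive gives 3 x 160 = 480GB, and eleven drives give 11 x 160, roughly the observed 1.8TB. To get the 3.2TB sum of all the disks, a linear (concatenated, JBOD-style) md array is the fit; a hedged sketch (device names are examples):

Code:

# Linear concatenation uses each member's full capacity
mdadm --create /dev/md0 --level=linear --raid-devices=11 /dev/sd[b-l]
mkfs.ext4 /dev/md0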

View 4 Replies View Related

Ubuntu :: MDADM RAID 5 Failed But Disks Are Still Present?

Jun 7, 2010

I just had a whole 2TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my RAID 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is the OS picks up the disks, but it doesn't see the partitions.

I ran smartctl -l on all the drives in question and they're all in good working order.

Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any possible errors that might be present?

Here is what the "good" drive looks like when I use sfdisk:

Quote:

sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 121600 121601- 976760001 83 Linux
/dev/sda2 0 - 0 0 0 Empty

[Code]....
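If the "busted" drives were originally partitioned identically to the good one (sfdisk -l works here, so these are MBR tables), the partition table can be copied back without touching the data areas; a sketch (triple-check source and target before running):

Code:

# Dump sda's MBR partition table and write it onto the damaged drive
sfdisk -d /dev/sda | sfdisk /dev/sdb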

View 2 Replies View Related

Ubuntu Servers :: Mdadm Inconsistent Status On Disks In Same Array

Jan 21, 2011

When I start my RAID5, only 2 disks of 3 are active on md0. The 3rd disk is inactive on md_d0. When I do mdadm --examine, the two active disks report 2 active, 2 working, 1 failed; the inactive disk reports 3 active, 3 working, 0 failed.
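md_d0 is the partitionable md device that Ubuntu's hotplug scripts sometimes create around a stray member, which then holds the disk hostage so md0 can't claim it. A hedged sketch of the usual remedy (the member's name is an example):

Code:

mdadm --stop /dev/md_d0            # release the member from the stray array
mdadm /dev/md0 --add /dev/sdc1     # hot-add it back to the real array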

View 2 Replies View Related

OpenSUSE Hardware :: Recovering MySQL Db From Failed RAID Array

Jun 30, 2011

I recognize that this isn't the typical question, but I have a problem with my openSUSE webserver, and I thought I would prevail on the community for some guidance. I have this webserver with an important MySQL db on it. The RAID array seems to have died while I was moving (did someone drop it? dunno). Now it can't find any boot device. It has 4 old SCSI drives.

So, I know how to mount an IDE or SATA drive as a slave in a Linux environment to read data off of it (to copy the MySQL files off of it). But how do I do that with a SCSI drive? Also, I have an additional (identical) server to the crippled one. What will happen if I just slide one of the SCSI drives into the operating server? Is this second identical server going to help me at all? I don't even know what is on it. Can I reconfigure the RAID so it's not using a drive, and then slide in a disk from the crippled server and copy the data off of it?

View 2 Replies View Related

Ubuntu Servers :: Create A New Mdadm RAID 5 Device /dev/md0 Across Three Disks?

Feb 5, 2011

I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in my array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sorta-thing. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID5 mdadm device (which gives me a bit less than 4TB.)

I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.

Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; the test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and rebooted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:

Code:
user@guybrush:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc5[3] sdb5[1] sda5[0]
3779096448 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

[Code]...

View 9 Replies View Related

Ubuntu Servers :: Partitioning >2TB RAID Array?

Mar 26, 2011

I have an Areca hardware RAID array that I'm trying to format and partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card; it's entirely separate. The RAID is a 6TB volume, so I realize I have to use parted to format it, not fdisk (which I've always relied on).

My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:

Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)[code].....

What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, 100% for my start/end with no success.
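A start of 1MiB (sector 2048) is a multiple of every common stripe and physical-sector size, which is normally enough to silence that warning. A sketch in parted's own prompt style (assuming a fresh GPT label is acceptable):

Code:

(parted) mklabel gpt
(parted) mkpart primary 1MiB 100%
(parted) align-check optimal 1     # verify, if this parted version has align-check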

View 8 Replies View Related

Ubuntu Servers :: Lost Partition Table On RAID 1 Array?

Jun 4, 2010

I just restarted my server (Ubuntu 9.04 server, running on ESXi 4.0), and while copying files onto the server using Samba I got strange problems and the connection was lost. When I rebooted the whole system, ESXi as well as Ubuntu Server, I found problems on my RAID disk.

In the directory where the new files were added I have a lot of files, but a lot of them do not have any info except their name:

1304 -rw-rw-rw- 1 spoorhobby spoorhobby 1327274 2010-05-15 22:10 DSCF1895.JPG
? -????????? ? ? ? ? ? DSCF1896.JPG
? -????????? ? ? ? ? ? DSCF1897.JPG
? -????????? ? ? ? ? ? DSCF1898.JPG

[Code].....

Both mirror disks are still functioning and I can still add/delete files from the server, from other Linux systems and from other Windows systems via Samba.

I did make a full backup on a different server.
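Those "?" rows mean ls could read the names from the directory but stat() failed on the inodes, which usually indicates filesystem damage rather than a dying mirror. With the backup done, a forced filesystem check is the natural next step; a sketch (the device and mount point are assumptions, not from the thread):

Code:

umount /srv/share          # hypothetical mount point of the mirror
fsck -f /dev/md0           # assumes the mirror is /dev/md0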

View 9 Replies View Related

Ubuntu Servers :: Starting Degraded RAID 5 Array?

Jun 11, 2010

So, my server has 7 HDs in RAID 5. All was working well until one of them died. The HD that died sort of works: it can read about half a file, but also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: "The drive for /media_kbt is not ready or present; press S to skip or M for manual recovery." I hit S and then go to Disk Utility, but I can't start or add disks to the array.

Here is me trying to do random stuff

Code:
administrator@3dslice-host:~$ sudo mdadm --stop /dev/md0
[sudo] password for administrator:
mdadm: metadata format 00.90 unknown, ignored.
mdadm: stopped /dev/md0
administrator@3dslice-host:~$ sudo mdadm --add /dev/md0 /dev/sda1
mdadm: metadata format 00.90 unknown, ignored.

[Code]...
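The "metadata format 00.90 unknown, ignored" line is a separate, cosmetic problem: older Ubuntu tools wrote metadata=00.90 into mdadm.conf, and newer mdadm only accepts 0.90. A sketch of the fix:

Code:

# Rewrite the bogus version string in the config file
sed -i 's/metadata=00\.90/metadata=0.90/' /etc/mdadm/mdadm.conf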

View 2 Replies View Related

Ubuntu Servers :: WRITE Performance Down On RAID 1 Array

Sep 7, 2010

I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!) Using dd to measure, write performance is only at 8.7 MB/s. Read is great though, at 74.5 MB/s. The tests were run straight after rebooting and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.

[code]...

As you can see from the bo column, there is definitely something stalling. As per the top output, the %wa (waiting for I/O) is always around 75%; however, as per above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new and smartctl (smartmontools) does not detect any faults.
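For anyone reproducing the measurement, a dd run only reflects the array if it bypasses the page cache; a hedged sketch (the test file's path on the array is an assumption):

Code:

# Write test: fdatasync makes dd wait until the data reaches the disks
dd if=/dev/zero of=/mnt/raid1/testfile bs=1M count=1024 conv=fdatasync
# Read test: flush the cache first so reads really hit the array
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/raid1/testfile of=/dev/null bs=1M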

View 4 Replies View Related

Ubuntu Servers :: RAID Array Not Mounting Correctly

Jun 6, 2011

I have an Ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it, though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots, it hangs partway through the bootup sequence and throws the following error:

"The disk drive for /var/media is not ready yet. Press S to skip ..." Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the mdadm assemble is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.

As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know:

1) why this broke in an upgrade and is it a known problem?
2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
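On 10.04, boot-time assembly happens in the initramfs using a baked-in copy of mdadm.conf, so an upgrade can leave that copy stale, and the mount then runs before the array exists. A hedged sketch of the usual repair (de-duplicate the ARRAY lines by hand afterwards):

Code:

# Refresh the ARRAY definitions, then rebuild the initramfs copy
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u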

View 9 Replies View Related

Ubuntu Servers :: Monitor A Buffalo RAID Array?

Jun 6, 2011

I have a 10.04 server with a LinkStation RAID 5 attached via USB. What is the best way to monitor the drives for a failure? It's at a remote site.

View 2 Replies View Related

Ubuntu Servers :: RAID 5 Software Array With 3TB Drives

Jun 15, 2011

I am trying to use 3 3TB Western Digital drives in a RAID 5 software array. The trouble seems to be that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.

Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name: BackupFull6
RAID type: RAID5
RAID size: 5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y

$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name: isw_cdjhcaegij_BackupFull6
size : 3131048448
stride : 128
type : raid5_la
status : ok
subsets: 0
devs : 3
spares : 0

So I cannot understand why the size of the created array is only 3131048448 blocks, or about 1.5 TB. The first command seemed to imply it was going to create an array of 5589GB.

System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
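One suspect here is the ISW (Intel fakeraid) metadata that dmraid writes, which historically did not cope with members over 2TB; the truncated subset size would be consistent with the 3TB drives being clipped. If dual-booting through the BIOS RAID isn't required, native mdadm sidesteps that format entirely; a hedged sketch:

Code:

# Plain Linux software RAID 5 across the same three disks
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg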

View 8 Replies View Related

Ubuntu Servers :: RAID-6 Cannot Start Degraded Array

Jun 26, 2011

Ubuntu Server 11.04 i386. I've used Linux on and off for years, but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS, but decided to give Ubuntu+Webmin a try, and up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array. I shut down, remove a drive, then boot up. The array won't start at all, and comes up with this error during boot:

Quote:

the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery

If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a", I get the response:

Quote:

mount: special device /dev/RAIDVG1/RAIDLV1 does not exist

I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine and is in a clean state.
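Two things worth checking, offered as a sketch rather than a definitive fix: changes under /etc/initramfs-tools only take effect after the initramfs is rebuilt, and the missing /dev/RAIDVG1/RAIDLV1 is an LVM volume that still needs activating once the array is started degraded:

Code:

update-initramfs -u          # bake BOOT_DEGRADED=true into the initramfs
mdadm --run /dev/md0         # start the partially assembled array (example name)
vgchange -ay RAIDVG1         # activate the logical volume on top of it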

View 9 Replies View Related

Ubuntu Servers :: RAID Array Incorrectly Assembled On Boot

Feb 20, 2011

I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up in a mirrored software RAID array using mdadm. They've synced, I've put filesystems on them (1 x ext4, 2 x LUKS + ext4) and I can mount them. I've checked the partitions using fdisk. I've checked the filesystems using fsck. So far so good. The next step is that I'd like mdadm to automatically assemble them on boot. (Not bothered about mounting and crypttab-ing yet.)

I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions. I've checked that this matches the output of sudo mdadm --detail --scan

The new lines in this file are:

ARRAY /dev/md9 level=raid1 num-devices=2 UUID=470fb8a6:45561fe0:ebda4a02:9ba7a1ed
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=f351fbba:c704a4b2:ebda4a02:9ba7a1ed
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=c6ccec17:2274588e:ebda4a02:9ba7a1ed

To check that the mdadm.conf is fine I have stopped the new arrays:

[Code].....
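One step that's easy to miss: the copy of mdadm.conf consulted at boot lives inside the initramfs, so after editing the file it needs rebuilding (a sketch, assuming the standard Ubuntu layout):

Code:

update-initramfs -u    # embed the new ARRAY lines into the boot image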

View 7 Replies View Related

Ubuntu Servers :: Make A Service To Depend On A RAID Array?

Mar 7, 2011

Short story: I have a problem with one of my services (mediatomb) - it requires an md RAID array to be mounted in order to start, because it uses files from it. $remote_fs is added by default to the "Required-Start" line of the init script, so I thought that this should be enough. However, the mediatomb service fails to start on boot, but starts just fine when I execute "service mediatomb start" later. The array is entered in /etc/fstab and is automatically mounted on boot.

Long story...

This is my file server (Ubuntu Server 10.10), which has a raid array created with mdadm (mounted on /z), and the root filesystem is located on an USB thumb drive. I've installed mediatomb, but I wanted to put its database files on the raid array instead of the root fs, so I've symlinked /var/lib/mediatomb (the default path) to /z/mediatomb on the array. This is because the mediatomb DB is supposed to be updated fairly often, so I didn't want it to stay on the flash drive.

Problem is, the mediatomb service can't start on boot - in /var/log/mediatomb.log, it says "2011-03-07 19:22:47 ERROR: /var/lib/mediatomb : 20 x No such file or directory". As I said, it works fine when manually started later...

This is the fstab entry for the raid array code...
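As a stopgap, the init script itself can wait for the mount instead of relying on dependency ordering; a hypothetical guard for the top of its start) case (the path /z is from the thread; the timeout is arbitrary):

Code:

# Wait up to 60 seconds for the array's mount point to appear
for i in $(seq 1 30); do
    mountpoint -q /z && break
    sleep 2
done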

View 1 Replies View Related

Ubuntu Servers :: Mdadm RAID 6 Array With Si 3132 SATA Controller ?

Mar 12, 2010

I've recently started having an issue with an mdadm RAID 6 array that has been operational for about 2500 hours.

Intermittently during write operations the array stalls, dropping to almost 0 write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference and it seems completely random. Sometimes copying a 5 GB dataset results in no slowdown; other times a torrent downloading to the array at 50kb/sec does cause a slowdown, and vice versa.

The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.

Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100mb/sec for each drive, ~425mb/sec for the array).

The only thing I did notice is that udma6 is enabled for all the ICH9R drives while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.

The si3132 drives are using the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During this time top shows very high wait time.

The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller, which now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.

Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so, which firmware is best supported by the Linux driver?

I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.

View 2 Replies View Related

Ubuntu Servers :: Mounting Existing RAID Array On Fresh Installation?

Aug 1, 2011

I'm running a 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with 4 x 2TB disks). All is working well. My mdadm.conf file looks like this:

Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.

[code]....

If I were to lose the boot disk and need to remount the RAID array on a fresh installation, what steps do I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
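That assumption is sound: the superblocks carry the array UUID and member roles, so a fresh install can reassemble from the disks alone. A sketch of the steps (standard Ubuntu paths assumed):

Code:

apt-get install mdadm                             # the md tools
mdadm --assemble --scan                           # find members via superblocks
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist for future boots
update-initramfs -u                               # and for boot-time assembly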

View 6 Replies View Related

Ubuntu Servers :: Mdadm - Why /dev/sdb1 And /dev/sdi1 Show As Both Ext2fs And Also As Part Of A RAID Array

May 31, 2011

I've been having some problems with my RAID 5 array, and after extensive investigation, I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, because it's a previously created array, but it didn't seem to like that. So, I checked into --create, and it will re-create the array without destroying the data, if the superblocks are persistent, which they seem to be. However, here's what I get:

[Code]....

My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?

View 3 Replies View Related

Ubuntu Servers :: Mounting Large (12TB) SCSI-Attached RAID Array (Formatted With NTFS)?

Feb 16, 2010

I have a large RAID array of 12 TB attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu. I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":

sudo fdisk -l
Disk /dev/sda: 164.7 GB, 164696555520 bytes
255 heads, 63 sectors/track, 20023 cylinders[code]........
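fdisk predates GPT, and a volume over 2TB has to be GPT-partitioned, so fdisk -l will show nothing useful for the 12TB array; parted reads GPT labels, and NTFS mounts through ntfs-3g. A sketch (device and mount point are examples):

Code:

parted /dev/sdb print                    # parted understands GPT labels
apt-get install ntfs-3g
mount -t ntfs-3g /dev/sdb1 /mnt/bigraid  # read/write NTFS driver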

View 2 Replies View Related

Ubuntu Servers :: RAID Won't Assemble: Failed To Add /dev/sda To /dev/md0: Invalid Argument

Jul 3, 2010

My hard drives kept dropping out of the RAID array, and I finally identified the problem as a bad SATA cable. I redid the wires, and now when I try to assemble I get this:

Code:

mdadm -v --assemble --scan /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: no recogniseable superblock on /dev/sde4

[code].....

View 5 Replies View Related

Ubuntu Servers :: RAID Failed - Missing Physical Disk?

Nov 23, 2010

My RAID array has failed. I have two disks, /dev/sda and /dev/sdb. /dev/sdb has failed, and I could not rebuild the array (mdadm returned that the device is busy), so I rebooted the machine. After that, the whole sdb disk went missing, as fdisk -l now only shows sda. Did the disk go totally dead, or did my RAID glitch?

View 8 Replies View Related






