Ubuntu :: Mdadm RAID5 "appear To Have Very Similar Superblocks"
Feb 17, 2011
One of the disks in my RAID5 array started acting up, giving me some I/O buffer errors and making the RAID stop. Disk info:
Code:
=== START OF INFORMATION SECTION ===
Device Model: WDC WD10EARS-00Y5B1
Serial Number: WD-WMAV51466805
Firmware Version: 80.00A80
[Code]...
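A typical first step here (not from the thread; a generic sketch using a hypothetical device name /dev/sdb) is to pull the full SMART report and kick off a self-test to see whether the drive itself is dying:
Code:
sudo smartctl -a /dev/sdb        # full attributes; watch Reallocated_Sector_Ct and Current_Pending_Sector
sudo smartctl -t long /dev/sdb   # long self-test; re-run -a later to read the result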
View 9 Replies
Jun 30, 2011
I know you can fail and then remove a drive from a RAID5 array. This leaves the array in a degraded state.
How can you remove a drive and convert the array to just a regular, clean array?
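One possible sequence with a reasonably recent mdadm (a sketch only, with hypothetical device names and sizes; shrink the filesystem first and have backups, since reducing an array is risky):
Code:
sudo resize2fs /dev/md0 900G                       # 1. shrink the filesystem below the future array size
sudo mdadm --grow /dev/md0 --array-size=943718400  # 2. shrink the usable array size to match (KiB)
sudo mdadm --grow /dev/md0 --raid-devices=2 --backup-file=/root/md0.bak  # 3. reshape to fewer members
sudo mdadm /dev/md0 --remove /dev/sdd1             # 4. remove the disk mdadm turned into a spare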
View 9 Replies
May 3, 2010
Created my own file server/NAS, but got stuck on a problem after a couple of months. I have a server with 4x 1.5TB disks, all connected to SATA ports, and a 40GB ATA133 disk running Ubuntu 9.10 x64. I created a RAID5 array using mdadm. It all worked great for a couple of months, but lately the RAID5 array is degraded: disk sdd1 is faulting every few days. I have checked the drive and it is fine. If I re-add the disk and wait 6 hours, my RAID5 array is all fine again, but after a few shutdowns it is degraded again.
My mdadm --detail output:
Quote:
root@ubuntu: sudo mdadm --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Mon Dec 14 13:00:43 2009
Raid Level : raid5
[Code].....
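For reference, the usual way to put the kicked member back and watch the rebuild (a sketch using the thread's device names; only sensible if the disk really is healthy):
Code:
sudo mdadm /dev/md0 --re-add /dev/sdd1   # return the failed member to the array
watch cat /proc/mdstat                   # follow the resync
sudo smartctl -H /dev/sdd                # sanity-check the drive's SMART health meanwhile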
View 9 Replies
Nov 2, 2010
I have Ubuntu Server 10.04 on a machine with a 2.8GHz CPU and 1GB of DDR2, with the OS on a 2GB CF card attached to the IDE channel, and a software RAID5 of 4x 750GB drives. On a Samba share using these drives I am only getting around 5MB/s, connected via wireless N at 216Mbps, with both my router and server having gigabit ports. Is a RAID5 supposed to be that slow? I was seeing speeds of anywhere from 20-50MB/s reported by other people and am just wondering what I am doing wrong to be so far below that.
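One way to narrow this down (a generic sketch, assuming the array is mounted at /mnt/raid) is to benchmark the array locally first, so Samba and the wireless link can be ruled in or out:
Code:
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 conv=fdatasync  # raw write speed, no Samba involved
sudo hdparm -t /dev/md0                                                # raw read speed of the md device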
View 4 Replies
Jun 14, 2011
I can't seem to get my RAID5 (consisting of 8x 1TB hard drives) assembled, and I have no idea why; I can't find any solutions online. I'll go ahead and show what my problem is.
Here are all my hard drives:
Code:
server:~$ sudo fdisk -l
Disk /dev/sda: 10.2 GB, 10242892800 bytes
255 heads, 63 sectors/track, 1245 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004f041
[Code]....
So as you can see, the array looks fine for the last four drives; however, for the first four it marks the last four drives as faulty for some reason. I am kind of clueless as to what to do from this point on, honestly. I have data on this array that I'd really like to save.
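A common next step (a sketch; the /dev/sd[b-i]1 glob is a guess at the member names) is to compare what each member's superblock believes about the array, then try a forced assemble with the members that agree:
Code:
sudo mdadm --examine /dev/sd[b-i]1 | egrep 'Events|State|this'   # compare event counts and roles
sudo mdadm --assemble --force /dev/md0 /dev/sd[b-i]1             # last resort before anything destructive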
View 3 Replies
Jul 27, 2010
After a failed upgrade from 9.10 to 10.04 I had to format my computer and do a clean install of 10.04, and now my mdadm RAID5 array won't start. My array is called "The Library", and I believe the space between "The" and "Library" is causing the command Disk Utility uses to start the array to fail. The exact error is: An error occurred while performing an operation on "The Library" (RAID-5 Array): The operation failed
Error assembling array: mdadm exited with exit code 1: mdadm: unrecognised word on ARRAY line: Library
mdadm: unrecognised word on ARRAY line: Library
[code]....
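One workaround (a sketch; the UUID shown is a placeholder, substitute the one mdadm reports) is to identify the array by UUID in /etc/mdadm/mdadm.conf and drop the name entirely, which sidesteps the space:
Code:
sudo mdadm --examine --scan   # prints ARRAY lines with UUIDs straight from the disks
# /etc/mdadm/mdadm.conf - no name= field needed:
ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef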
View 1 Replies
May 13, 2011
My fileserver initially had 3x 1TB drives in RAID5, configured with mdadm as /dev/md1. (The system root is a mirrored RAID on /dev/md0.) I went to add a 4th 1TB drive to /dev/md1 and grow the RAID5 accordingly. I was initially following this guide: [URL] but ran into issues on the 3rd and 4th commands. I've been trying a few things to remedy the issue since, but no luck. The drive seems to have been added to /dev/md1 properly, but I can't get the filesystem to resize to 3TB. I am also not entirely sure how /dev/md1p1 got created, but it appears to be the primary partition on the logical device /dev/md1.
Relevant information:
Code:
fdisk -l /dev/md1
Disk /dev/md1: 3000.6 GB, 3000606523392 bytes
2 heads, 4 sectors/track, 732569952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0xda4939fa .....
The filesystem originated as ext3; I believe it's showing up as ext2 in some of these results because I disabled the journal during some initial troubleshooting. I am not sure what the issue is, but I didn't want to blindly perform operations on the filesystem and risk losing my data.
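For reference, the sequence that usually applies here (a sketch with hypothetical member names): since a partition table exists on md1, the filesystem lives on /dev/md1p1, so that partition has to be grown before resize2fs can do anything:
Code:
sudo mdadm --grow /dev/md1 --raid-devices=4   # reshape onto the new member (added earlier with --add)
# then grow md1p1 itself: delete and recreate it with the SAME start sector
# (fdisk/parted on /dev/md1), re-read the partition table, and finally:
sudo e2fsck -f /dev/md1p1
sudo resize2fs /dev/md1p1                     # resize the partition's filesystem, not /dev/md1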
View 9 Replies
Jul 19, 2011
This is the error message I'm getting when trying to format the mdadm RAID5 created with 4 drives:
Code:
Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/md1, start=0, size=6001196531712, type=
Entering MS-DOS parser (offset=0, size=6001196531712)
MSDOS_MAGIC found
found partition type 0xee => protective MBR for GPT
Exiting MS-DOS parser
[Code]...
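The array is about 6TB, which is past the 2TB limit of an MS-DOS partition table; that is likely why the helper trips over the protective MBR. A sketch of the two usual options:
Code:
# Option A: GPT label with one full-size partition
sudo parted /dev/md1 mklabel gpt
sudo parted -a optimal /dev/md1 mkpart primary 0% 100%
# Option B: skip partitioning entirely and put the filesystem on the md device
sudo mkfs.ext4 /dev/md1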
View 3 Replies
Feb 18, 2011
I am getting really frustrated trying to get my RAID5 working again. I had a RAID5 array built with 4 of the Western Digital 1.5TB "Advanced Format" drives (WD15EARS). However, when copying 1.5GB DVD-encoded files to the array, I was getting speeds of ~2MB/s. When researching how to make this faster, I came across all the posts about the Advanced Format drives and how they were causing a lot of issues for a lot of people. It looked like the solution was simple enough: partition starting at sector 64 or 2048 (see the alignment sketch after the specs) and then recreate the RAID. However, this is not working for me.
Here are my computer specs:
Motherboard: Gigabyte GA-EP43-DS3L LGA 775 Intel P43 ATX
CPU: Intel Core 2 Duo E8400 Wolfdale 3.0GHz 6MB L2 Cache LGA 775 65W
RAM: 4gb DDR2 1066 (PC2 8500)
Video card: ASUS GeForce 9600GT 512MB 256-bit
Linux: 10.04
[Code].....
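For reference, a sketch of getting a 4K-aligned partition on one of these drives (hypothetical device name; a 2048-sector start gives 1MiB alignment):
Code:
sudo parted -a optimal /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary 1MiB 100%
sudo parted /dev/sdb unit s print   # confirm the partition starts at sector 2048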
View 1 Replies
Mar 6, 2010
I was adding another disk to my RAID5. All was going well: it started the reshape, got past the critical zone, and worked for 20 minutes, but now it seems to have crashed. When I run cat /proc/mdstat or mdadm -D /dev/md0, those programs hang and don't print anything or return. From my kern.log I can see that there was an error on a disk; the RAID array removed it and was going to continue the reshape, but finished immediately. Anyone know what I should do?
Mar 6 16:20:26 Aries kernel: [1931119.599107] md: reshape of RAID array md0
Mar 6 16:20:26 Aries kernel: [1931119.599107] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Mar 6 16:20:26 Aries kernel: [1931119.599107] md: using maximum available idle IO bandwidth (but not more
[code]....
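After a reboot, an interrupted reshape can normally be resumed at assemble time (a sketch; member names and the backup-file path are hypothetical):
Code:
sudo mdadm --stop /dev/md0                    # if it responds at all
sudo mdadm --assemble /dev/md0 /dev/sd[b-f]1  # the reshape resumes from the superblocks
# if the grow was started with a backup file, pass it along:
#   sudo mdadm --assemble /dev/md0 --backup-file=/root/md0-grow.bak /dev/sd[b-f]1
cat /proc/mdstat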
View 2 Replies
Jun 21, 2011
I've been playing with this for hours, and have been unable to figure it out. I tried to convert my RAID5 array of 4 active disks and 1 spare to a RAID6 with 5 active disks.
I did this:
Code:
mdadm --grow /dev/md4 --raid-devices 5 --level 6
Here is what I have on /dev/md4:
Code:
/dev/sde1 active
/dev/sdg1 active
/dev/sdj1 active
/dev/sdf1 active
removed
/dev/sdh5 spare
code....
but it tells me that /dev/sde is busy, and then that it has a bad superblock (from what I've read, I'm sure the bad superblock message is just a consequence of the "busy" one). I've tried this with the -f option too, with no luck.
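For comparison, a RAID5-to-RAID6 reshape normally wants a backup file, since the critical section has to be staged off-array (a sketch; the path is hypothetical, and the busy error suggests first checking that nothing (LVM, dm-crypt, a mount) still holds the members):
Code:
sudo mdadm --grow /dev/md4 --level=6 --raid-devices=5 \
    --backup-file=/root/md4-reshape.bak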
View 7 Replies
Feb 15, 2010
I have a problem with my mdadm RAID and wanted to know if anyone has experience with shrinking RAID5 arrays. I was growing the array from 5 to 6 devices, but the grow got interrupted and it has recovered to 5 drives. The 6th drive is toast and I am unable to re-add it to the system. I would like to remove the device listed as "removed". I have tried mdadm /dev/md0 --remove detached and --remove failed, with no success. I am running Ubuntu kernel 2.6.28-11 and mdadm v3.1.1.
Here is the output of "mdadm -D /dev/md0":
/dev/md0:
Version : 0.90
Creation Time : Wed Jan 12 00:46:41 2009
Raid Level : raid5
Array Size : 4883812480 (4657.57 GiB 5001.02 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Feb 15 20:25:07 2010
State : active, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 74fa5199:84b88e81:4ae0fbae:92643084
Events : 0.1331010
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 0 3 active sync /dev/sda
4 8 64 4 active sync /dev/sde
5 0 0 5 removed
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[4] sda[3] sdd[2] sdc[1]
4883812480 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
unused devices: <none>
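Given the [6/5] state above, one approach (a sketch; the size is 4x the Used Dev Size shown, and a reduction like this needs a current mdadm, a recent kernel and good backups) is to reshape the device count back down to 5 so the "removed" slot disappears:
Code:
# shrink the filesystem safely below the new size first, then:
sudo mdadm --grow /dev/md0 --array-size=3907049984                       # 4 data disks' worth, in KiB
sudo mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0.bak  # reshape away the empty slot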
View 4 Replies
Feb 2, 2010
Something weird happened last night and my RAID5 failed. I am trying to reactivate it and see whether my data is dead or not. When I run mdadm -Asv /dev/md0 I get:
Code:
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/dm-1: Device or resource busy
mdadm: /dev/dm-1 has wrong uuid.
mdadm: cannot open device /dev/dm-0: Device or resource busy
mdadm: /dev/dm-0 has wrong uuid.
mdadm: cannot open device /dev/sde2: Device or resource busy
mdadm: /dev/sde2 has wrong uuid.
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: /dev/sde1 has wrong uuid.
mdadm: cannot open device /dev/sde: Device or resource busy
mdadm: /dev/sde has wrong uuid.
mdadm: cannot open device /dev/sdd: Device or resource busy
mdadm: /dev/sdd has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
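Every device reporting "busy" suggests something else, device-mapper by the look of the /dev/dm-* entries (dmraid or LVM), has claimed the disks. A sketch of checking and releasing them (remove_all is blunt: only use it if nothing else on the box depends on device-mapper):
Code:
sudo dmsetup ls && sudo dmsetup table   # what maps exist, and on top of which disks
sudo dmsetup remove_all                 # tear down stale maps holding the members
sudo mdadm -Asv /dev/md0                # then retry the assemble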
View 4 Replies
Jan 17, 2011
Relatively inexperienced Linux/Ubuntu user here. Not too savvy, I admit, and I like to use the GUI as much as possible; not a great fan of the terminal window. A couple of weeks ago I installed Ubuntu 10.10 (Desktop Edition) using the alternate install disk (don't ask why!) on a 4GB USB stick. It's working fine except for one thing with the RAID array. I created a RAID5 array made of 6 drives using the GUI (Disk Utility). After an expansion of the array (or was it a reinstall of the OS? I can't remember exactly), the array does not autostart anymore, and of course it does not automount anymore either.
THE WEIRD THING is that I can still start it MANUALLY from the Disk Utility GUI, on the second try, and it works just fine thereafter! The first time I try to start it, it gives an error (something about /dev/md0_127 being not ready or busy). THE SECOND TRY ALWAYS WORKS like a charm: the array starts and I can mount it just fine. I have also noticed that there is no entry in fstab for /dev/md0, although I can manually mount it using the same Disk Utility GUI. That is strange to me. Is it normal? I could easily add it manually, but then Ubuntu won't boot anymore (I tried and failed, hence the reinstall). I have spent two weeks looking for a solution on different forums, but the problem is beyond my expertise.
BELOW are further details about my configuration: mdadm.conf, fstab, fdisk -l results and other info. I don't want to lose my data, but it would be nice to make this thing work and be able to access my fileserver via VNC instead of having to keep it connected to an LCD monitor as now. This is the blkid result:
[Code]...
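The usual reason an array needs a manual kick on Ubuntu is that it is missing from /etc/mdadm/mdadm.conf and from the initramfs. A sketch of the standard fix (run with the array assembled and healthy):
Code:
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'  # persist the array definition
sudo update-initramfs -u                                     # so it assembles at boot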
View 3 Replies
Mar 12, 2011
I'm trying to find out which one is safer when it comes down to the recovery process in case of a drive failure:
A RAID5 created in mdadm
or
a Stripe RAID created on pure LVM
The RAID is purely for data storage for a Samba server; the OS will reside on its own drive. Ideally the RAID's physical hard drives should be rebuildable on another machine in case of catastrophic server failure (motherboard problem, or any other random problem, for example). I can't decide which of the two software RAID methods is more convenient and safest. I don't care about performance; it'll be a dedicated server for mass storage, mirroring 3 other file servers on fakeRAIDs (dmraid). It's simply a redundant backup for the backups.
The important goal here is portability. From what I've read, it appears that LVM might be more portable, but according to some dated (2009) info mdadm seems to be a bit buggy when it comes to rebuilding the array, and LVM doesn't appear that safe either. Which one would you pick for ease of rebuilding after a catastrophic failure?
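On the portability point, mdadm keeps a superblock on every member disk, so moving the disks to another machine typically needs no config carried over (a sketch):
Code:
sudo mdadm --examine --scan    # on the new machine: find md superblocks on all disks
sudo mdadm --assemble --scan   # assemble whatever was found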
View 2 Replies
Jan 25, 2010
I have an old Athlon XP 3000 machine that I keep around as a file server. It currently has three 1TB drives which I had set up as mdadm RAID5 on FC10. The machine's original drive held the superblock info for the RAID array, and it just had a massive heart attack. I've searched, my biggest source being URL... I can't tell if I can reassemble the superblock info lost with the original hard drive or if I've lost it all.
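It may be worth checking the three data disks directly before assuming the worst: mdadm writes a member superblock onto each array disk itself, so the OS drive mostly held configuration. A sketch (member names hypothetical):
Code:
sudo mdadm --examine /dev/sd[a-c]1   # valid metadata here means the array is recoverable
sudo mdadm --assemble --scan         # rebuild the array from the superblocks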
View 9 Replies
Mar 2, 2010
When I inserted a 1TB drive into my new QNAP NAS, it removed my partition on that drive without notifying me first. Is there a way to retrieve the actual locations of the backup superblocks, or any other way to guess their locations?
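For an ext2/3/4 filesystem, a dry-run mkfs with matching parameters reprints where the backups would have been placed (a sketch; -n writes nothing, but this assumes the partition has been recreated at its old offset first):
Code:
sudo mke2fs -n /dev/sdb1         # prints the backup superblock locations, touches nothing
sudo e2fsck -b 32768 /dev/sdb1   # then point fsck at one of the reported backups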
View 11 Replies
Jan 9, 2010
I have no drive failures; I just need to recreate a RAID5 set as the next free md disk number. Originally I built a temporary Debian OS on a single drive and had 4x 2TB drives in a RAID5 software array (md0). This worked fine and allowed me to move all data to it and remove our old fileserver. I have now pulled out the 4x 2TB RAID5 drives and created a new OS on two new 80GB drives, partitioned as follows:
MD0 is now 250MB RAID1 as /boot
MD1 is 4GB RAID1 swap
MD2 is 76GB RAID1 as /
If I power off and push the 4x 2TB drives back in, I cannot see an MD3. I presume I would need to create an MD3 from these 4 drives, but I don't want to mess things up as it's live data. So I'm here asking for help, or a bit of hand-holding to get it done right.
PS - It's a Debian Lenny 5.0.3 RAID1 fresh install replacing a Debian Lenny 5.0.3 on a single disk.
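Since the data already exists, the array should be assembled, not created; --create would rewrite the superblocks. A sketch (device names are hypothetical; check fdisk -l first):
Code:
sudo mdadm --examine --scan                   # confirm the four old members are seen
sudo mdadm --assemble /dev/md3 /dev/sd[c-f]1  # bring them up as md3 without touching the data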
View 2 Replies
Feb 7, 2010
I know there are many threads about recovering damaged superblocks. I've spent 3 evenings reading them and trying what they suggest. Invariably the commands do nothing except report bad or missing superblocks. I've removed the physical disk from the machine and am working with a dd image file (/mnt/image). I can mount what used to be hdc1 and read its files with no problem. I'm trying to recover partitions hdc6 and hdc7.
$ mmls /mnt/image -b
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors
[code]....
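Once mmls reports a start sector for hdc6, that partition can be attached read-only at the matching byte offset inside the image (a sketch; START is a placeholder for the sector mmls prints):
Code:
sudo losetup -r -o $((START * 512)) /dev/loop0 /mnt/image  # read-only loop at the partition offset
sudo e2fsck -n -b 32768 /dev/loop0                         # probe a backup superblock, change nothing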
View 1 Replies
Jun 27, 2010
I have an image of an ext3 file system made with dd. I know that the file system is corrupted, but I want to try to recover some files. Whether I dd it back to the original partition or attach the dd image to a loop device, this is what happens:
- dumpe2fs -h gives me a valid ext3 superblock.
- as I try to mount the device read only, it fails with a bad magic number error.
- executing dumpe2fs -h again gives bad magic number error.
- trying debugfs or fsck with backup superblocks fails the same way.
To me it seems that, in spite of mounting the device read-only, the mount command does something to the superblock, since before the mount the superblock is correct and present.
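A likely explanation: mounting ext3 "ro" still replays the journal, which writes to the device. A sketch of a mount that genuinely cannot write (read-only loop plus noload to skip journal recovery):
Code:
sudo losetup -r /dev/loop0 /path/to/image            # kernel-enforced read-only
sudo mount -t ext3 -o ro,noload /dev/loop0 /mnt/rec  # noload = no journal replay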
View 4 Replies
Nov 22, 2009
Here's a brief description of my system:
120GB Sata HDD - Primary OS drive
3 x 1.0TB Sata HDD - Raid 5 array
This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB drive, which I had problems with: Anaconda refused to see the drive. Fedora Live could see it fine, but it was listed as an 'nvidia_raid_member' (no idea why), so I completely erased the disk under the Live CD and proceeded to install F12.
Once F12 was installed, I used mdadm to re-activate my RAID5 array, using 'sudo mdadm --assemble --uuid=(the uuid)', and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. I erased /dev/sdb and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb', and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container". I can find NO information on this error message.
[Code].....
I don't believe the hard drives are connected in the exact same order they were in before; I disconnected everything in the system and blew it out (it was pretty dusty).
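The "member array" message and the earlier nvidia_raid_member label both point at leftover BIOS fakeraid (container) metadata on the disk. A sketch of clearing it (this destroys RAID metadata on /dev/sdb, so double-check the device):
Code:
sudo dmraid -r                          # list any BIOS-raid signatures found
sudo dmraid -rE /dev/sdb                # erase the fakeraid metadata from the disk
sudo mdadm --zero-superblock /dev/sdb   # clear any stale md superblock too
sudo mdadm --add /dev/md0 /dev/sdb      # then retry the add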
View 1 Replies
Jan 11, 2010
I am planning on setting up a 4x 1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm"; all worked fine except for the following error:
Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found failed.
The end result is that the /dev/md0 device has not been created, as can be seen here:
Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
After googling, I found the following bug which describes the issue: [URL] However, it was reported way back in April 2009, and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug, to get me up and running?
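A workaround that reportedly got people past this (a sketch; md devices are block major 9, with the minor matching the array number) is to create the device node by hand:
Code:
sudo mknod /dev/md0 b 9 0     # block device, major 9, minor 0
sudo mdadm --detail /dev/md0  # should open now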
View 4 Replies
May 23, 2011
I have created a 9TB RAID from four 1.5TB drives and three 2TB drives (each split into a 1.6TB and a 0.4TB partition). I thought it would be a 9TB partition, and Ubuntu says it is a 9TB partition, except when looking at what drive space is left: Nautilus' Properties and System Monitor both say that the RAID is 4.1TB with 1TB free, but Disk Utility and Nautilus say it is a 9TB RAID. Very odd. I have tried checking the RAID and the XFS file system; no errors. Here is the output from watch cat /proc/mdstat:
Code:
md1 : active raid5 sdg1[5] sdf1[6] sde1[4] sda[0] sdd[3] sdb[2] sdc[1]
8790830976 blocks level 5, 4k chunk, algorithm 2 [7/7] [UUUUUUU]
[code]....
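If the XFS filesystem was made before the array reached its current size, the free-space numbers will lag the device size. Since XFS grows while mounted, a sketch (mount point hypothetical):
Code:
sudo xfs_growfs /mnt/raid   # grow the filesystem to fill md1
df -h /mnt/raid             # should now report ~9TB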
View 9 Replies
Apr 15, 2010
Here is what I have and what I want to do.
3 new 1.5TB HDs, plus 1 used 1.5TB HD with 980MB of data. I want to set up a RAID5 with a hot spare. I have music, pictures, videos, and movies (about 2.8TB worth). I previously had a mismatch of drives: a 250GB, two 320GBs, a 500GB, two 1TBs and now a 1.5TB, all with data. I have removed the 250GB and the two 320GBs and put their data on the 1.5TB that is currently installed.
What I would like to do is create a RAID5 with the three new 1.5TB HDs, copy the data over from the currently installed 1.5TB, and then grow the array or add that drive as a hot spare. Or just add it now and add another 1.5TB down the road as a hot spare; I don't know for sure.
In addition, since I have two 1TB drives, I could add 2 more (there are good deals on 1TB drives right now) and have a total of four 1TB drives. Could I have two RAID5s (4x 1TB and 4x 1.5TB) in two separate arrays? I really do not know if that makes sense or not, but here comes LVM. I am tired of managing my HD space, and since I have multiple folders (movies, music, pictures, videos), and within the movies folder I have R, G & PG folders for the ratings of the movies (password-protect the R so the kids can't get to it), with LVM on top of the RAID5 I should be able to create my folders and just keep adding data, not worrying about moving folders around when I grow the storage by adding new drives. Is that correct? Maybe someone could point me to a how-to.
Also, if I create 2 arrays (and I need to know, so I can order the 2 additional 1TB drives), then I could put all the music, G and PG content on one array, and all the R and spicy stuff on the other and password-protect it.
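A sketch of the general shape of this plan (hypothetical device names; the old 1.5TB joins as a spare only after its data has been copied off):
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1  # the 3 new 1.5TB drives
sudo pvcreate /dev/md0                # LVM on top, so folders live in resizable volumes
sudo vgcreate media /dev/md0
sudo lvcreate -L 2T -n movies media
# ... copy the data over, wipe the old drive, then attach it as a hot spare:
sudo mdadm /dev/md0 --add /dev/sde1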
View 4 Replies
Jun 10, 2011
I am trying to build a new array after adjusting TLER on my disks, which permanently changed some of the drives' sizes. I am not sure whether the following inconsistencies are related to the newly mismatched drive sizes.
Using:
Code:
mdadm --create --auto=md --verbose --chunk=64 --level=5 --raid-devices=4 /dev/md1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
Nets me (build-time was two full days):
[Code]....
On a side note, since I'm recreating my array from scratch, I was wondering if anyone here knows of any optimized settings I could use. I've got 3TB of data to transfer, so lots of test material.
These are Western Digital first-generation 2TB Green drives (WD20EADS-00R6B0) with the WDidle3 fix applied & TLER=ON. These are pre-Advanced Format (i.e. not 4K).
Code:
mkfs.ext4 -E stripe-width=48,stride=16 /dev/md1
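For what it's worth, those numbers are consistent with a 4-disk RAID5 on a 64K chunk; the usual arithmetic (a sketch, assuming 4K filesystem blocks) is:
Code:
# stride       = chunk / block size  = 64K / 4K     = 16
# stripe-width = stride * data disks = 16 * (4 - 1) = 48
mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md1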
View 9 Replies
Jan 8, 2010
I had a RAID5 of 5x 1TB via kernel RAID on md0. I then created a LUKS container out of md0 -> /dev/mapper/md0 and formatted this container as ext3. That worked fine. Now I bought another hard drive and grew the RAID to 6 devices. The RAID is now running and bigger than before - great. luksOpen still works, BUT I can't mount the ext3 on it. It seems like the LUKS volume also grew; it's now 1TB bigger without my doing anything. The problem: I can't mount the LUKS volume anymore (mapping it was no big deal, though). It always tells me that I should specify the filesystem type:
Code:
root@ubuntu:/home/ubuntu# mount /dev/mapper/md0 /mnt/tmp
mount: you must specify the filesystem type
If I do so, it continues to give me errors:
Code:
root@ubuntu:/home/ubuntu# mount /dev/mapper/md0 /mnt/tmp -t ext3
mount: wrong fs type, bad option, bad superblock on /dev/mapper/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
It seems as if the LUKS system doesn't know that the LUKS container isn't using the whole drive.
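The usual sequence after growing the array underneath LUKS is to resize the crypt mapping, check the filesystem, and only then grow it (a sketch; run against backups, and only after a clean luksOpen):
Code:
sudo cryptsetup resize md0       # stretch the mapping over the grown device
sudo e2fsck -f /dev/mapper/md0   # see what state the ext3 is actually in
sudo resize2fs /dev/mapper/md0   # grow the fs into the new space once it checks clean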
View 2 Replies
Mar 5, 2010
I recently installed a new home backup server with Ubuntu 9.10 x86_64 using the alternate CD. I used the CD's installer to partition my disk and created a software RAID 5 array on 4 disks with no spares. The root file system is located outside the raid array.
At first the array performed nicely, but as it started to fill up, the I/O performance dropped significantly, to the point where I get a transfer rate of 1-2MB/s when writing!
[Code]...
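Not from the thread, but one knob that often moves RAID5 write throughput is the md stripe cache (a sketch; the value is illustrative, and it costs RAM, roughly value x 4KiB x number of disks):
Code:
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size          # default is 256
dd if=/dev/zero of=/mnt/raid/test bs=1M count=512 conv=fdatasync  # re-test writes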
View 9 Replies
Dec 31, 2010
I recently went shopping at Newegg for components for a new home server to fulfill multiple needs, chief amongst them replacing an aging old Athlon box which has been running a 4TB software RAID for years now. So I ordered an ASUS P7P55D-E LX mobo, 8GB DDR2, a Core i5 quad-core, and 6 WD "Green" 2TB drives (I'm more concerned with capacity than speed). I got it all assembled, using a 500GB 2.5" drive as the boot/host drive and a Blu-ray reader, and split the 2TB drives amongst the 2 controllers (4 on the remaining ports of the x6 SATA controller and 2 on the x2 SATA-III controller, not needing the SATA III capabilities). I plan to use software RAID5 rather than the fakeraid that might be supported by the chipset; this affords maximum portability for the future.
At any rate, I installed 10.10 64-bit (Desktop), let it update, then installed mdadm and set about configuring the RAID5. I decided to use the nifty graphical tool; the last time I did this on my old Fedora box, it was all CLI. All seemed to be going well, and I was letting it fully recover/synchronize before setting up a filesystem (planning on ext4 this time). I kept an eye on it with cat /proc/mdstat.
After 20 hrs (averaging 25MB/sec on the sync... seems a bit slow, but who knows?) it finishes, and the RAID is immediately degraded, with one volume (/dev/sdc1) marked as faulty and another volume marked as a spare. I replaced the SATA cables with brand new ones, as long ago I ran into an issue like this due to a bad cable. Before trying again I ran the extensive SMART test and a read/write benchmark on each drive. They all checked out fine. So I started over, and 20 hrs later, same thing. So I shut down, removed the offending drive and started over with only 5 drives. Another 19 hrs go by, and this one turns up /dev/sde1 faulty and another drive marked as a spare. That drive was fine before, and it is on a different controller from the one hosting the previously faulted drive.
Now I am suspicious that there is a larger underlying issue and that the drives are not bad at all. Should I have just gone with a 32-bit + PAE install? Does 64-bit have known issues with software RAID? Should I have just built it using the CLI? Looking online, it seems people are having success with the GUI tool, so I don't think it is the GUI tool. Checking with mdadm, everything seems to be configured right.
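One thing worth ruling out on desktop "Green" drives is error-recovery timeouts: if a drive stalls on a bad sector long enough, the kernel kicks it from the array even though SMART looks fine. A sketch of what to check (scterc support varies by drive and firmware, and these WD Greens may not expose it):
Code:
dmesg | egrep -i 'ata[0-9]|timeout'      # link resets or timeouts around the failure?
sudo smartctl -l scterc /dev/sdc         # query the error-recovery timeout
sudo smartctl -l scterc,70,70 /dev/sdc   # 7s read/write, if the drive allows it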
View 2 Replies
Mar 25, 2011
I've just tried setting up my new RAID with 4 disks; however, it won't let me create the RAID.
I am running my hard drives through a
Quote:
Promise SATA300 TX4 interface/controller
That's my create command:
Code:
sudo mdadm --create --verbose /dev/md0 --level=5 \
    --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Error msg
Quote:
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/sdc1 is too small: 0K
mdadm: create aborted
If I try to do
Code:
sudo fdisk /dev/sdc
and check the partitions with
code....
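The "0K" usually means the kernel's view of /dev/sdc1 is stale or the partition really is empty. A sketch of rereading and, if need be, recreating it (the mkpart step destroys anything on sdc):
Code:
sudo blockdev --rereadpt /dev/sdc                         # refresh the kernel's partition table
cat /proc/partitions | grep sdc                           # does sdc1 have a sane block count now?
sudo parted -a optimal /dev/sdc mklabel gpt               # if not: recreate a full-disk partition
sudo parted -a optimal /dev/sdc mkpart primary 1MiB 100%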
View 1 Replies
May 17, 2011
I am setting up a RAID5 and torture-testing it. I added two eSATA ports to my machine. When a drive is installed in one of those eSATA ports and the machine is then booted, the device name (e.g. /dev/sdc) is inserted in the middle of my RAID devices. And that is just one example of how the device names can change. I did a search on 'static device names' but saw nothing directly related to RAID. What I did see were suggestions to create udev rules based on UUID, but that was for single disks, not RAID, where each drive/partition in the RAID array appears to have the same UUID. I'm surprised this does not come up in the various RAID howtos, because it is impossible to keep a RAID array intact without solving this problem, unless the machine is never touched thereafter.
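For md arrays this is normally solved at the array level rather than with per-disk udev rules: every member carries the array's UUID in its superblock, so mdadm.conf can pin the array to that UUID and member device names stop mattering (a sketch; the UUID below is a placeholder):
Code:
sudo mdadm --detail --scan   # prints a ready-made ARRAY line with the UUID
# /etc/mdadm/mdadm.conf:
ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef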
View 1 Replies