Debian Installation :: No Boot From RAID Array?

Dec 22, 2010

I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation itself went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.

View 4 Replies



Ubuntu Installation :: RAID Array Will Not Boot (x64)?

Sep 15, 2010

I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my fakeRAID 0 array. I tell it not to install GRUB, since I know that's still currently broken, and the install goes flawlessly. However, on first boot I have to use my live GRUB CD: unless I tell the computer to point to the CD, it will hang (even though it is set to boot from CD first, so I'm not sure why it does). When I tell it to boot to Linux, it will not boot, saying the kernel is missing files (too much to list, sadly, and none of it I understand), then offers me a terminal to input "help" into for a list of commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.

View 6 Replies View Related

Ubuntu Installation :: Dual-boot On A RAID-0 Array - Drops To GRUB Command Line?

May 28, 2011

I've recently had trouble reinstalling my Ubuntu system, as I was getting various unusual errors, as described in my old thread here. I thought it was probably something to do with my RAID-0 array, which was pre-installed on my laptop from purchase, being corrupted or something like that (if that's possible). I decided to simplify things for myself (not understanding RAID arrays much), so I just removed the RAID array and installed Windows and Ubuntu on the now-separate hard disks. It worked fine.

I noticed quite a significant performance drop, however, with even Ubuntu boots taking longer than 30 seconds, despite my laptop being both high-spec and only a few months old. Windows, as you can imagine, was dreadfully slow. I wasn't entirely convinced that this was entirely due to the loss of the RAID array - even low-spec laptops with presumably no RAID arrays are supposed to boot Ubuntu in under 30 seconds, apparently - but I read that RAID-0 arra...

View 8 Replies View Related

Software :: RAID Array Broken On Boot?

Jan 18, 2010

I have an issue with a RAID array failing on boot. It seems like an issue with the file system. I get past the RAID BIOS (and from what I can see, it looks alright there; all devices appear), but then the following error messages appear:

Code:

raid5: failed to run raid set md0
mdadm: failed to RUN_ARRAY /dev/md0 input/output error
mdadm: Not enough devices to start the array

and further down:

Code:

fsck.ext3: Invalid argument when trying to open /dev/md0
/dev/md0:
The superblock could not be read or does not describe a correct ext2

[code]....

I then log in with the root password and get a "Repair filesystem" prompt. I tried fsck, but it didn't work. It's 4x1TB in RAID5 on a HighPoint RocketRAID 2300 4P SATA II/300, with Fedora 9. Not sure what other system info might be needed.
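A common first step here (a sketch, assuming the members are /dev/sda1 through /dev/sdd1; the real device names will differ) is to read each member's superblock and, if one drive has merely fallen out of sync, force-assemble the remaining set. Note that fsck against /dev/md0 can only produce the "superblock could not be read" error above while the array itself isn't running:

Code:

# inspect each member's superblock and compare event counts
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# force assembly; a 4-disk RAID5 can run with one member missing
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# only once the array runs, check the filesystem (read-only first)
fsck.ext3 -n /dev/md0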

View 5 Replies View Related

Slackware :: RAID Array Not Detected On Boot?

Aug 16, 2010

I have a RAID level 5 array with metadata 1.2, made with mdadm. I put it in /etc/fstab to mount it on boot, but that doesn't work because the RAID is not detected at boot. I have an /etc/mdadm.conf like this:

Code:

ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=0 UUID=afdfe00e:0d18a5eb:29aa54f9:8b422ee0

Just another thing... After the command

Code:

mdadm --detail --scan >> /etc/mdadm.conf

The mdadm.conf is like this:

Code:

ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.02 name=0 UUID=afdfe00e:0d18a5eb:29aa54f9:8b422ee0

But I changed the metadata version manually, because 1.02 gives me an error. I don't know if it is a bug or what! Besides this, I have to put a line in /etc/rc.d/rc.local to assemble the array.

Code:

mdadm --assemble --scan --uuid=afdfe00e:0d18a5eb:29aa54f9:8b422ee0

And after that I can mount it. Why is the array not detected on boot? Is it because of the metadata type? Can I put the line I have in /etc/rc.d/rc.local to assemble the array in another file, one that will be executed before /etc/fstab is processed?
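The metadata version is indeed the likely culprit: the kernel's in-kernel RAID autodetection only handles 0.90 superblocks on partitions of type fd, so a 1.2 array has to be assembled in userspace before the fstab mounts run. A sketch of the usual workaround (assuming a stock Slackware init layout; the exact script may differ on your system) is to move the assemble call from rc.local into an init script that runs before local filesystems are mounted:

Code:

# in /etc/rc.d/rc.S (or any script that runs before "mount -a"),
# assemble everything listed in /etc/mdadm.conf:
/sbin/mdadm --assemble --scan

As for 1.02 vs 1.2, that looks like a display quirk of older mdadm releases rather than a real version; editing it to 1.2 as you did is reasonable.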

View 5 Replies View Related

Ubuntu :: Unable To Boot Due To Changed UUID In RAID Array

May 9, 2010

I'm running 64-bit Lucid. I've recently had a severe problem with my softraid (5) array, and have had to recreate the array to fix it. However, this now means that something is up with GRUB/initramfs, and booting times out while waiting for the root device (md0) to be ready. /boot is on a normal partition, not on the RAID array itself. A friend of mine has rebuilt my initramfs file with the new UUID, but now I get the message: 'Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(9,0)'. So my question is: either how do I sort out this error, or how do I rebuild initramfs/GRUB in a way that will boot?
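The panic on unknown-block(9,0) means the kernel reached mount time with md0 (major 9, minor 0) still unassembled, which usually points at a stale ARRAY line rather than the initramfs image itself. A sketch of the usual fix, assuming an Ubuntu system reached via chroot from a live CD:

Code:

# inside a chroot of the installed system:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append the new ARRAY line
# (then delete the old, stale ARRAY line from the same file)
update-initramfs -u                              # rebuild against the new conf
update-grub                                      # refresh the menu entries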

View 6 Replies View Related

Ubuntu Servers :: Raid Array Incorrectly Assembled On Boot

Feb 20, 2011

I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up in a mirrored software raid array using mdadm. They've synced, I've put file systems on them (1 x ext4, 2 x luks + ext4) and I can mount them. I've checked the partitions using fdisk. I've checked the filesystems using fsck. So far so good. Next step is that I'd like mdadm to automatically assemble them on boot. (Not bothered about mounting and crypttabing yet.)

I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions. I've checked that this matches the output of sudo mdadm --detail --scan

The new lines in this file are:

ARRAY /dev/md9 level=raid1 num-devices=2 UUID=470fb8a6:45561fe0:ebda4a02:9ba7a1ed
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=f351fbba:c704a4b2:ebda4a02:9ba7a1ed
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=c6ccec17:2274588e:ebda4a02:9ba7a1ed

To check that the mdadm.conf is fine I have stopped the new arrays:

[Code].....
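The elided test sequence would typically look something like the sketch below (array names as in the mdadm.conf above). The step that's easy to miss is rebuilding the initramfs: boot-time assembly reads the copy of mdadm.conf embedded there, not the one in /etc:

Code:

sudo mdadm --stop /dev/md8              # stop the running arrays
sudo mdadm --stop /dev/md9
sudo mdadm --stop /dev/md10
sudo mdadm --assemble --scan            # reassemble from mdadm.conf alone
cat /proc/mdstat                        # confirm all three came back
sudo update-initramfs -u                # embed the new conf for boot time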

View 7 Replies View Related

Debian Configuration :: RAID Array Not Accessible

Aug 29, 2015

Just set up Debian 8 (LXDE) a few weeks ago. The RAID10 array was preexisting.

It was working well: after booting I would need to go to the save-as, then enter the root password, and everything would be good.

Can't access the array.

I used to use the command $ mount /dev/dm-0 /home/myspace/folder under Debian 7.6 to mount the array, but it no longer works.
blkid lists a /dev/md0, but instead of a UUID it shows a PTUUID.

[Code] .....
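One possibility, offered purely as a guess from the PTUUID clue: blkid reporting a PTUUID means md0 contains a partition table rather than a bare filesystem, in which case the filesystem would live on a partition of the array. A two-line sketch to check:

Code:

ls /dev/md0*                             # look for a partition such as /dev/md0p1
mount /dev/md0p1 /home/myspace/folder    # mount the partition, not md0 itself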

View 0 Replies View Related

Software :: RAID 5 Array Not Assembling All 3 Devices On Boot Using MDADM - One Is Degraded

Aug 31, 2010

I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced in, to make a RAID5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing device back without any problems later; it's just that this takes hours to sync. Here is some information:

[Code]....

View 11 Replies View Related

Debian Configuration :: Reorganizing Disks In MD RAID Array

Mar 4, 2010

I'm trying to do some RAID managing with mdadm. I would like to sync my spare disk and then remove it from the array, to make a backup out of it with the dd command (the best way I can think of to get a current image of the whole system, as it can't be done using the active RAID as the source, because that is constantly in use and changing). So, I have a RAID1 array with 1 spare and 2 active disks (configuration listed below). Now I would like to force the spare to sync and then remove it from the array, although it is not faulty.

However, mdadm man page states:
"Devices can only be removed from an array if they are not in active use. i.e. that must be spares or failed devices. To remove an active device, it must be marked as faulty first."

So, I'd have to mark a disk as faulty (which it is not) to be able to remove it from the array. Several people report that they can't remove a faulty flag accidentally given to a drive, and mdadm does not provide a direct command for such an operation. Isn't there a way I could remove and add disks whenever I feel like it? One way would be to open the cover and physically remove the disk. I'm not taking that risk, though. The system is almost always in use, so there is not much chance for me to power off for a temporary disk removal.

RAID CONFIGURATION:
~# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 4 17:38:26 2006
Raid Level : raid1
Array Size : 238950720 (227.88 GiB 244.69 GB)
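One way to get this without inventing a fault (a sketch; /dev/sdc1 stands in for the spare, and note the faulty flag is only an array-state marker that clears when the disk is re-added, not something written to the drive) is to grow the mirror so the spare becomes a synced active member, then shrink it back out:

Code:

mdadm --grow /dev/md0 --raid-devices=3   # spare is recovered into a 3-way mirror
cat /proc/mdstat                         # wait for the recovery to finish
mdadm --fail /dev/md0 /dev/sdc1          # harmless: a state flag, not disk damage
mdadm --remove /dev/md0 /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=2   # back to a clean 2-disk mirror
dd if=/dev/sdc1 of=/backup/md0.img bs=1M # image the now-consistent member
mdadm --add /dev/md0 /dev/sdc1           # returns as a spare, flag cleared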

View 3 Replies View Related

Debian Configuration :: Use A Whole Disk Or A Partition In RAID Array?

Aug 31, 2010

concerning Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID, both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID5 one) with just the disks themselves as members, while others create the RAID5 array with the previously created partitions as members. E.g.,

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
vs.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

What's the advantage of using one over the other?

View 3 Replies View Related

Debian Configuration :: Setting Up A RAID 1 FAt32 Array?

Jul 2, 2011

New to Linux in general, and I'm having issues setting up a RAID 1 array for two disks on an HP ProLiant MicroServer, which I am looking to make accessible from my Windows PC. I have installed the latest version of Debian successfully on a 250GB disk that came with the server. I have added two 2TB disks which I would like to have in a RAID 1 array, visible from Windows, to store music/videos etc. on. I have managed to partition the two disks as FAT32 (which I think is best) and have managed to configure the array so that it shows as active when I use cat /proc/mdstat. I have been following the steps in this article [URl]... squeeze-p2 and trying to adapt it to my situation.

I am stuck on the step to create the file systems using the mkfs command. I try mkfs.vfat /dev/md0 and it comes up with the error "mkfs.vfat: command not found". I have tried mkfs -t vfat /dev/md0 and it gives the error "mkfs.vfat: No such file or directory". So my question is: how can I continue with the process of setting up the array? Or maybe I should be asking: is it possible to set up an array with FAT32-formatted disks?
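The immediate error is just a missing package: on Debian, mkfs.vfat ships in dosfstools. A minimal sketch:

Code:

apt-get install dosfstools
mkfs.vfat -F 32 /dev/md0

That said, if the Windows PC will reach these disks over the network (e.g. a Samba share) rather than by plugging them in directly, the filesystem never has to be Windows-readable, and ext4 plus Samba is the more common setup.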

View 2 Replies View Related

Debian :: Setting Up A RAID Array With Multiple Partitions

May 23, 2011

I need to set up a RAID 1 array on Squeeze. I have 3 partitions: sda1 is root, sda5 is home, and sda6 is swap. (sda2 is the extended partition containing home and swap. This was a clean installation, so I don't know what happened to sda3 and sda4...)

All the information that I've been able to find recommends doing something like this:

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

Do I need to type a separate command for each partition, or is there a better way to do it? Also, should I use UUIDs instead of the dev names?
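It is one mdadm --create per array: each md device is built independently, so three mirrored partition pairs mean three commands. A sketch, assuming a second and third disk partitioned to match (the sdb/sdc names are placeholders):

Code:

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1  # root
mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb5 /dev/sdc5  # home
mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb6 /dev/sdc6  # swap

mdadm itself identifies members by their superblocks, so plain device names are fine in the create commands; it's in /etc/fstab that UUID= entries pay off, since /dev names can shift between boots.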

View 4 Replies View Related

Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. It has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750GB) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue, as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu), as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order), the final array size comes out at 1.8TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250+ 750 + 250 +200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
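The sizes reported are exactly what equal-width striping produces: a RAID-0 stripe can only use as much of each member as the smallest one offers, so capacity is member count times smallest member, which is why adding the 160GB drive shrank the total. For spanning unequal disks, what's wanted is a linear (JBOD) array, which simply concatenates them. A sketch of the arithmetic and the alternative (device names hypothetical):

Code:

echo $(( 11 * 160 ))   # striped: 11 members x 160GB smallest = 1760GB, the ~1.8TB seen
echo $(( 160+250+250+750+250+200+200+250+320+250+320 ))   # linear sum: 3200GB
# a linear md array concatenates members instead of striping across them:
mdadm --create /dev/md0 --level=linear --raid-devices=11 /dev/sd[b-l]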

View 4 Replies View Related

Fedora :: Boot Can't Access Ext4 Partitions On LVM Logical Volumes On RAID Array?

Feb 8, 2011

I have a Fedora 11 server which can't access the ext4 partitions on LVM logical volumes on a RAID array during boot-up. The problem manifested itself after a failed preupgrade to Fedora 12; however, I think the attempt at upgrading to FC12 might not have anything to do with the problem, since I last rebooted the server over 250 days ago (sometime soon after the last Fedora 11 kernel update). Prior to the last reboot, I had successfully rebooted many times (usually after kernel updates) without any problems. I'm pretty sure the FC12 upgrade attempt didn't touch any of the existing files, since it hung on the dependency checking of the FC12 packages. When I try to reboot into my existing Fedora 11 installation, though, I get an error screen. A description of the server filesystem (partitions may be different sizes now due to the growing of logical volumes):

Code:

- 250GB system drive
  250MB     /dev/sdh1                      /boot    ext3
  LVM partition (rest of drive): VolGroup_System
  10240MB   VolGroup_System-LogVol_root    /        ext4

[code]....

Except he's talking about fakeRAID and dmraid, whereas my RAID is Linux software RAID using mdadm. This machine is a headless server which acts as my home file, mail, and web server. It also runs MythTV with four HD tuners. I connect remotely to the server using NX or VNC to run applications directly on the server. I also run an XP Professional desktop in a QEMU virtual machine on the server for times when I need to use Windows. So needless to say, it's a major inconvenience to have the machine down.
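A sketch of the usual recovery route, assuming the Fedora 11 rescue environment and the VolGroup_System naming above (the kernel version is a placeholder): boot the install media with 'linux rescue', let it mount the installation, then rebuild the initrd so boot-time assembly of the arrays and LVM works again:

Code:

mdadm --assemble --scan                # bring up the md arrays, if not automatic
vgchange -ay VolGroup_System           # activate the logical volumes on top
chroot /mnt/sysimage                   # rescue mode mounts the system here
mkinitrd --force /boot/initrd-<version>.img <version>

Fedora 11 still used mkinitrd; if the preupgrade attempt pulled in dracut, the equivalent rebuild is dracut --force.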

View 1 Replies View Related

Debian Configuration :: Can't Correctly Install Grub On RAID Array

Nov 26, 2015

I'm having issues with a RAID array.

Setup is like this:

Debian Jessie, 2 hard disks, each having 2 partitions: /dev/sda1, /dev/sda2, /dev/sdb1, /dev/sdb2. Partitions were paired during installation, so they form /dev/md0 and /dev/md1. /dev/md0 is the root (/) partition, /dev/md1 is for /home.

At the end of the install process, I chose /dev/sda1 to carry Grub. And I think this is where I screwed things up.

After removing one of the hard drives, there was no boot capability. So, I installed Grub on /dev/sdb, too.
Now it displays the boot menu but cannot find the kernel. This is where I got lost in the process.

Do I need to reinstall the OS or is there a way to fix it? I suppose I have to edit Grub.
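A reinstall shouldn't be needed. The usual repair (a sketch, run from a live/rescue system) is to assemble the arrays, chroot into md0, and install GRUB to the MBR of both disks so that either one can boot alone:

Code:

mount /dev/md0 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda
grub-install /dev/sdb
update-grub

Choosing /dev/sda1 (a partition) instead of /dev/sda (the disk's MBR) during the install is the classic slip here; with GRUB in the MBR of each member, pulling either drive still leaves a bootable system.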

View 0 Replies View Related

Debian Hardware :: RAID As Multiple Disks - Configuring Array?

Dec 2, 2010

Alright, I have this issue on both SystemRescueCd and Debian Squeeze. I have an ASUS P5Q Turbo board that supports onboard RAID. If I configure an array and then start the Linux installer or boot the rescue CD, I get /dev/sda and /dev/sdb instead of an array. What gives? I need to start installing within the hour, so I am desperate for an answer!
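Onboard RAID on boards like this is host-based (fakeRAID): the BIOS only stores the array definition, and the OS has to interpret it, so seeing the raw /dev/sda and /dev/sdb is expected until something activates the set. A sketch using dmraid, the classic tool for this (the isw_ name is illustrative of Intel sets):

Code:

modprobe dm_mod
dmraid -ay          # activate the BIOS-defined RAID sets
ls /dev/mapper/     # the array appears here, e.g. isw_xxxxxxxx_Volume0

The other option, often simpler on a Linux-only box, is to disable the BIOS RAID entirely and build the mirror with mdadm instead.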

View 1 Replies View Related

Ubuntu Installation :: Raid Array Not Available After Upgrade?

Jul 11, 2010

After upgrading my Ubuntu install, my RAID array is gone. The drives appear in blkid as "Linux raid member" and both have the same UUID. If I try to mount the array via fstab, I get a message that the drive is not ready or present. If I try to mount each of the two drives individually, one mounts successfully; the other reports serious errors. Issuing cat /proc/mdstat shows md_d0 as inactive. How can I re-establish my RAID array? I have the data backed up, so if I have to wipe the disks and start over, that's an option.
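An inactive md_d0 usually means a partial assembly grabbed the members during boot. A sketch of the recovery sequence, to run before any mount attempts:

Code:

sudo mdadm --stop /dev/md_d0            # release the half-assembled array
sudo mdadm --assemble --scan --verbose  # reassemble from the member superblocks
cat /proc/mdstat                        # check the array state before mounting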

View 2 Replies View Related

Ubuntu Installation :: How To Keep RAID Array Safe

Aug 6, 2010

I currently have a nice HTPC setup that has been upgraded from distribution to distribution since 8.xx, all the way up to 9.10 now. I just moved to a new place and it feels like the right time to do a fresh install of 10.04 on the HTPC. The problem is that I have a RAID 5 array in the system that has all my pictures, videos, music, etc. The OS is installed on a separate drive that is not part of the RAID array (I have 4 drives in the system: 3 in the array, 1 for the OS). What is the general process I should follow to do:

1. a fresh install of 10.04

2. do #1 while at the same time not losing my array (don't think I would anyway).

3. what to do after install to get the array back up and running and mounted.
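A sketch of the whole sequence, assuming a stock 10.04 install and a mount point of your choosing: in the installer's partitioning step, touch only the OS disk and leave the three array members alone; the array's superblocks survive untouched. Then, after the first boot:

Code:

sudo apt-get install mdadm                                  # not installed by default
sudo mdadm --assemble --scan                                # members found via superblocks
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf' # record the ARRAY line
sudo update-initramfs -u                                    # assemble at boot from now on
# finally, restore the old /etc/fstab entry for the array's filesystem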

View 4 Replies View Related

Ubuntu Installation :: Repairing Mbr On Raid Array?

Nov 27, 2010

repairing the MBR on my RAID array. I have three disks, each with three partitions: root (sda1, sdb1, sdc1) 59GB; swap (sda2, sdb2, sdc2) 1.12GB; grub/boot (sda3, sdb3, sdc3) 298MB. I have been able to get this running and it has been working fine for several months. A few days ago, I installed 10.04 to a USB stick but did not disable the hard drives at that point, and so the MBR was overwritten. If I leave the USB stick in, it boots fine from that stick. However, now I can't get the boot from the RAID array to work correctly. I can do the following: load 10.04 from the Live CD, install mdadm, recreate the root partition using

Code:

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

I can mount and view the files on md0 with no problems; it's not corrupted in any way. When I installed, I followed the directions to make each of the grub drives bootable. However, I don't know for sure whether GRUB was installed on each partition separately or on the assembled partition only. I have tried using

Code:

sudo grub-install /dev/sda3

and got warnings, something to the effect of:

Code:

Cannot find a device for /boot/grub
no path or device specified
Auto-detection of a filesystem module failed
specify the module with option '--module' explicitly

I have also been able to get to the grub rescue prompt, but my keyboard (wireless USB) is not recognized there, so I can't type anything in at that point.
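For what it's worth, grub-install aimed at a bare partition device has no mounted /boot/grub to work from, hence those warnings. A sketch of the repair from the live CD, continuing from the assemble step above (that the boot partitions form a second array, md1, is an assumption; note GRUB goes to the disks, not to the sdX3 partitions):

Code:

sudo mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3  # the grub/boot array
sudo mount /dev/md0 /mnt                                      # root filesystem
sudo mount /dev/md1 /mnt/boot                                 # boot filesystem
sudo grub-install --root-directory=/mnt /dev/sda              # MBR of every disk
sudo grub-install --root-directory=/mnt /dev/sdb
sudo grub-install --root-directory=/mnt /dev/sdc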

View 8 Replies View Related

Ubuntu Installation :: Merge 1TB Disks Into And RAID 5 Array?

Apr 11, 2010

I wanted to merge my 1TB disks into a RAID 5 array. Four of them in RAID 5 is above the 2TB limit of msdos partition tables that grub2 can boot from, so I decided to set the system up from scratch on GPT partitions. But it seems grub2 won't boot from the GPT partitions, because it drops to grub rescue and I can't really do anything from there.

here's my set up:

/dev/md0 (raid 1) - 100MB total:
- /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1
/dev/md1 (raid 5) - 45GB total:
- /dev/sda2, /dev/sdb2, /dev/sdc2, /dev/sdd2
/dev/md2 (raid 5) - something a bit lower than 3TB:
- /dev/sda3, /dev/sdb3, /dev/sdc3, /dev/sdd3

Any tips on how to get this system up and running? I've spent about 3 days jumping over various problems.
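The missing piece on BIOS-plus-GPT is usually a BIOS boot partition: GRUB2 has nowhere to embed its core image on a GPT disk unless the disk carries a tiny partition flagged bios_grub. A sketch (sizes and numbering illustrative; repeat on each disk, then rerun grub-install):

Code:

parted /dev/sda
(parted) mkpart biosgrub 1MiB 3MiB
(parted) set 5 bios_grub on      # use the number parted assigned to the new partition
(parted) quit
grub-install /dev/sda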

View 8 Replies View Related

Ubuntu Installation :: Raid 0 - Two Hard Disk Array

Jul 8, 2010

What is the best way to install Windows and Linux on a two-hard-disk array? With fakeRAID there are no problems in Windows, but Linux installation is almost impossible (I've tried, unsuccessfully...). With software RAID, would it be impossible to share files between Windows and Linux? And finally, hardware RAID is possible, but cheap controllers have low performance. Is there any other way (apart from spending a lot of $$ on an Adaptec controller)?

View 1 Replies View Related

Debian Installation :: Possible To Boot Off Of RAID 5?

Jan 8, 2011

I am looking to build a server with 3 drives in RAID 5. I have been told that GRUB can't boot if /boot is contained on a RAID array. Is that correct? I am talking about a fakeRAID scenario. Is there anything I need to do to make it work, or do I need a separate /boot partition which isn't on the array?
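With Linux software RAID (mdadm), the usual answer: GRUB boots happily from /boot on RAID1, since each mirror member looks like an ordinary filesystem; GRUB legacy cannot read RAID5 at all, and although GRUB2 gained md RAID5 support, a small mirrored /boot remains the robust, widely recommended layout. A sketch (the partitioning is assumed):

Code:

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1  # small /boot
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2  # everything else

True fakeRAID is a different story: the BIOS presents the set as a single disk, dmraid exposes it to Linux, and /boot can sit on it like any other partition, though bootloader support there has historically been shakier.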

View 3 Replies View Related

Software :: Rebuilt RAID Array With Old Mount Points Present - File System Check Fails On Boot

Dec 2, 2009

I have one hard disk for my root partition and a disk array on a separate mount point. I rebuilt my disk array, but I didn't delete my original mount points beforehand, because I was hoping it would just "pick up". So now when I boot up, the OS tells me that the filesystem check fails because it can't find the array to map to the mount point. I know that I need to edit my /etc/fstab and remove the line that defines the mount point on the disk array, but the filesystem appears to be read-only when I am in repair mode; I can't force the write with vi.
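The repair shell mounts root read-only so fsck can run safely; remounting it read-write is all vi is waiting for. A two-line sketch:

Code:

mount -o remount,rw /
vi /etc/fstab    # remove or comment out the stale array mount line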

View 3 Replies View Related

General :: Installation On Software RAID 1 Array Running Vista 32-bit

Jan 23, 2011

I'm currently using Windows Vista 32-bit on a RAID 1 array; I'm using the RAID provided by my motherboard, so it's fakeRAID. Anyway, I'd like to do some C development under Linux, but I'm not exactly sure how to go about installing it on a fakeRAID 1 array without messing up Windows. I'm not sure which Linux distro I'm going to install, so I'm hoping that information isn't important. Would I just resize my Windows partition and put Linux on the newly created partition? Do I have to worry about where Linux will put its bootloader, or will it manage that on its own? (I didn't mean software RAID, I meant fakeRAID.)

View 1 Replies View Related

Ubuntu Installation :: Unable To Install Grub On RAID Array

Feb 3, 2011

I'm trying to switch to a new RAID5 array but can't get it to boot. My disks:

/dev/sda: new RAID member
/dev/sdb: Windows disk
/dev/sdc: new RAID member
/dev/sdd: old disk, currently using /dev/sdd3 as /

The RAID array is /dev/md0, which is comprised of /dev/sda1 and /dev/sdc1. I have copied the contents of /dev/sdd3 to /dev/md0, and can mount /dev/md0 and chroot into it. I did this:

Code:

sudo mount /dev/md0 /mnt/raid
sudo mount --bind /dev /mnt/raid/dev
sudo mount --bind /proc /mnt/raid/proc

[code]....

This completes with no errors, and /boot/grub/grub.cfg looks correct [EDIT: No it doesn't. It has root='(md/0)' instead of root='(md0)']. For example, here's the first entry:

Code:

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.35-25-generic' --class ubuntu --class gnu-linux --class gnu --class os {

[code]....

However, when I try to boot from /dev/sda, I get:

Code:

error: file not found
grub rescue>
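One plausible reading, offered as an assumption: the core image written to sda lacks the md support it needs, so at boot it cannot find /boot/grub on the array and drops to rescue before any menu. A sketch of re-running the install from the chroot with the RAID modules named explicitly (module names differ across GRUB versions: raid/mdraid on 1.98, mdraid09/mdraid1x on 1.99 and later):

Code:

# inside the chroot on /mnt/raid:
grub-install --recheck --modules="raid mdraid" /dev/sda
grub-install --recheck --modules="raid mdraid" /dev/sdc
update-grub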

View 9 Replies View Related

Ubuntu Installation :: Software Raid-5 - Error Not Enough Components To Start The Array

Aug 10, 2010

I have an HTPC that was giving me an insane amount of problems after 3 months of good use.

1x 250gig Samsung Drive (OS Drive)

3x 1TB Western Digital Caviar Green (Raid-5)

In 9.10, the RAID was working fine. I decided to fresh-install Ubuntu 10.04, and now I can't start the RAID array. In Disk Utility the array shows up, but when I try to start it I get the error "Not enough components to start the array".

I've tried to assemble the array using mdadm and the following:

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdd1

This returns the following error: mdadm: failed to create /dev/md0

I have no idea what to do now, and unfortunately I don't have any backups
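Before anything destructive, it's worth checking whether the members simply changed device names in the reinstall (the OS drive need not still be the missing letter). A sketch:

Code:

sudo mdadm --examine --scan         # arrays as recorded in the member superblocks
sudo mdadm --examine /dev/sd[a-d]1  # confirm which partitions are raid members
sudo mdadm --assemble --scan        # assemble by UUID rather than device name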

View 1 Replies View Related

Ubuntu Servers :: Mounting Existing RAID Array On Fresh Installation?

Aug 1, 2011

I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with x4 2TB disks). All is working well. My mdadm.conf file looks like this

Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.

[code]....

If I was to lose the boot disk and need to remount the RAID array on a fresh installation, what steps would I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
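That assumption is right: everything mdadm needs to reassemble the array lives in the member superblocks. On a fresh installation the sketch is just:

Code:

sudo mdadm --assemble --scan   # members are found purely via their superblocks
sudo mdadm --detail --scan     # regenerate the ARRAY line for the new mdadm.conf

Keeping a copy of mdadm.conf and the fstab line saves a minute of typing but isn't required.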

View 6 Replies View Related

Debian Installation :: Will Not Boot After Install On Areca ARC-1110 RAID

Dec 1, 2010

I performed an install using the 5.0.6 amd64 netinst CD on a dual-Opteron server with an Areca ARC-1110 4-port SATA hardware RAID card. I have 2 250GB drives set up as RAID 1. The Debian install saw it as only one drive, just as it should. The install went smoothly, but on reboot, the system would not load.

I did some research and tried a couple of things with no luck, like adding a delay to the grub command. It just sits at loading the system for a while, then times out and loads BusyBox. Just to check things out, I booted into an Ubuntu live CD and mounted the volume. The file system is there, with all of the necessary files. How do I use one of these cards successfully?
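Two things worth checking from the BusyBox prompt, sketched below: whether the Areca driver (arcmsr) made it into the initramfs, and whether the volume merely needs more time to appear (the rootdelay figure is arbitrary):

Code:

# at the initramfs (BusyBox) shell:
cat /proc/modules | grep arcmsr    # is the Areca driver loaded?
ls /dev/sd*                        # did the volume appear at all?
# if the module is missing, from a live-CD chroot of the installed system:
echo arcmsr >> /etc/initramfs-tools/modules
update-initramfs -u

Failing that, appending something like rootdelay=15 to the kernel line in GRUB gives a slow controller time to settle before root is mounted.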

View 2 Replies View Related

Debian Installation :: Grub Rescue - Will Not Boot From Mdadm RAID - No Such Disk

Sep 19, 2014

I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.

Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".

Output from mdadm:
Code:

    # mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Wed Nov  7 17:06:02 2012
         Raid Level : raid6
         Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
      Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
       Raid Devices : 14

[Code] ....

Output from blkid:
Code:

    # blkid
    /dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"

[Code] ....

The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.

Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....

During grub rescue, an `ls` shows the member disks and also shows `(md/0)`, but if I try `ls (md/0)` I get an unknown disk error. Trying `ls` on any member device results in unknown filesystem. The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.

I have come close to losing my mind over this. I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, `grub-install` numerous times to all member disks without error, etc.

I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.

[URL] ....

One thing I noticed is that it is only showing half the disks. I am not sure whether this matters, but one theory would be that it's because there are two LSI cards physically in the machine.

This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.

I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.
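Two observations, offered as assumptions. First, the mduuid GRUB wants is not wrong: b1c40379-914e-5d18-dddb-893b4dc5a28f is the md array UUID recorded in the member superblocks (visible on /dev/sdd2 in the blkid output above), not the XFS filesystem UUID, so rewriting grub.cfg can't help. Second, the half-missing disks are the real lead: if the BIOS only exposes the drives behind one of the two LSI cards, GRUB can never assemble a 14-disk RAID6 at boot, and installing GRUB to a non-array disk changes nothing while /boot still lives on md0. The robust fix is a small RAID1 /boot on BIOS-visible disks (metadata 1.0, so each member also reads as a plain filesystem). A sketch:

Code:

mdadm --create /dev/md1 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext2 /dev/md1
# copy the contents of /boot across, point /etc/fstab at it,
# then run grub-install against /dev/sda and /dev/sdb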

View 14 Replies View Related






