Debian Installation :: Possible To Boot Off Of RAID 5?

Jan 8, 2011

I am looking to build a server with 3 drives in RAID 5. I have been told that GRUB can't boot if /boot is contained on a RAID array. Is that correct? I am talking about a fakeraid scenario. Is there anything I need to do to make it work, or do I need a separate /boot partition which isn't on the array?
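For what it's worth, the usual answer to this question is a small /boot on RAID 1 (each member looks like a plain partition to the boot loader) with the large RAID 5 for everything else. A hedged sketch, assuming Linux software RAID (mdadm) rather than the BIOS fakeraid, with hypothetical device names:

```shell
# Sketch only: small RAID 1 /boot (0.90 metadata keeps the superblock
# at the end of the partition, so boot loaders can read each member as
# a plain partition), with the rest of each disk in the RAID 5 for /.
mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1    # /boot
mdadm --create /dev/md1 --level=5 --raid-devices=3 \
      /dev/sda2 /dev/sdb2 /dev/sdc2    # /
```

With a fakeraid array assembled by dmraid instead, GRUB sees whatever the BIOS presents, so whether a separate /boot is needed depends on the controller.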

View 3 Replies



Debian Installation :: No Boot From RAID Array?

Dec 22, 2010

I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation itself went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.

View 4 Replies View Related

Debian Installation :: Will Not Boot After Install On Areca ARC-1110 RAID

Dec 1, 2010

I performed an install using the 5.0.6 amd64 netinst cd on a dual opteron server with an Areca ARC-1110 4-port SATA hardware RAID card. I have 2 250GB drives set up as RAID 1. The debian install saw it as only one drive, just as it should. Install went smoothly, but on reboot, the system would not load.

I did some research and tried a couple of things with no luck, like adding a delay to the GRUB command line. It just sits at loading the system for a while, then times out and drops to BusyBox. Just to check things out, I booted an Ubuntu live CD and mounted the volume. The file system is there, along with all of the necessary files. How do I use one of these cards successfully?
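For anyone hitting the same timeout with a slow-to-initialize RAID card, the delay is normally given as a rootdelay kernel parameter; a sketch of the change (the 30-second value is a guess, tune it):

```shell
# GRUB legacy (the Debian 5.x default): append to the kernel line in
# /boot/grub/menu.lst
#   kernel /vmlinuz-... root=/dev/sda1 ro rootdelay=30
# GRUB 2: set it in /etc/default/grub, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=30"
```

This only helps if the card's volume eventually appears; if the driver never binds, the delay changes nothing.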

View 2 Replies View Related

Debian Installation :: Grub Rescue - Will Not Boot From Mdadm RAID - No Such Disk

Sep 19, 2014

I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.

Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".

Output from mdadm:
Code: Select all    # mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Wed Nov  7 17:06:02 2012
         Raid Level : raid6
         Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
      Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
       Raid Devices : 14

[Code] ....

Output from blkid:
Code: Select all    # blkid
    /dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
    /dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"

[Code] ....

The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
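One observation that may explain the mismatch: GRUB's mduuid/ notation appears to use the RAID array's own UUID - the linux_raid_member UUID that blkid shows on /dev/sdd2 - with the dashes stripped, not the XFS filesystem UUID of md0. A quick sketch of the relationship:

```shell
# the member UUID from blkid, dashes removed, is exactly the mduuid
# string GRUB is searching for
member_uuid="b1c40379-914e-5d18-dddb-893b4dc5a28f"
printf '%s\n' "$member_uuid" | tr -d '-'
# -> b1c40379914e5d18dddb893b4dc5a28f
```

So GRUB isn't chasing a stale address; it just can't assemble the array that UUID names.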

Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....

During the grub rescue, an `ls` shows the member disks and also shows `(md/0)`, but if I try an `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in unknown filesystem. The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.

I have come close to losing my mind over this. I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, and `grub-install` numerous times to all member disks without error, etc.
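One more thing that might be worth a try, offered as a guess rather than a known fix: explicitly embedding the mdraid and filesystem modules in GRUB's core image when reinstalling, so the early boot environment can assemble a 1.2-metadata array:

```shell
# hypothetical invocation; mdraid1x handles 1.x-metadata arrays,
# mdraid09 the old 0.90 format
grub-install --modules="mdraid1x part_msdos xfs" /dev/sdd
update-grub
```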

I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.

[URL] ....

One thing I noticed is that it is only showing half the disks. I am not sure whether this matters, but one theory is that it's because there are two LSI cards physically in the machine.

This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.

I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.

View 14 Replies View Related

Debian Installation :: Cannot Make Grub2 Boot Software RAID (Error)

Jun 2, 2011

I have a Dell PowerEdge SC1425 with two SCSI-disks, that I have tried installing Debian Squeeze on. This machine has previously been running Lenny (with grub 1), and the upgrade was done by booting a live-cd, mounting the root partition and moving everything in / to /oldroot/, then booting the netinstall (from USB), selecting expert install and setting up everything (not formatting the partition).

Both disks have identical partition tables:
/dev/sda1 7 56196 de Dell Utility
/dev/sda2 8 250 1951897+ fd  Linux raid autodetect
/dev/sda3 * 251 9726 76115970 fd  Linux raid autodetect
/dev/sda1 and /dev/sdb1 contain a Dell Utility, that I have left in place.
/dev/sda2 and /dev/sdb2 are members of a Raid-1 for swap.
/dev/sda3 and /dev/sdb3 are members of a Raid-1 for / formatted with reiserfs.

After installation, grub loads, but fails with the following message:
GRUB loading.
Welcome to GRUB!
error: no such disk.
Entering rescue mode...
grub rescue>

Doing "ls" shows:
(md0) (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1)

I can do the following to get grub to boot:
set root=(hd0,3)
set prefix=(hd0,3)/boot/grub
insmod normal
normal
This will bring me to the grub menu, and the system boots.

It appears that grub has only found md0, which I believe is the swap partition, because ls (md0)/ returns error: unknown filesystem. I have installed grub to sda, sdb, and md1, and tried dpkg-reconfigure grub-pc and dpkg-reconfigure mdadm, as well as update-grub.

I manually added "(md1) /dev/md1" to /boot/grub/device.map, but still no result.

I have run the boot_info_script.sh, but unfortunately I cannot attach the RESULTS.txt, because the forum apparently does not allow the txt extension. Instead I have placed it here: [URL]. I am tempted to go back to grub-legacy, but it seems I am quite close to getting the system working with grub2.

View 6 Replies View Related

Ubuntu Installation :: Dual Boot - Win 7 And 10.10 On Raid 0 - No Raid Detect

Nov 26, 2010

I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this is a problem I cannot solve by reading forums like I have in the past.

I realize this is a common problem, but I have noticed people having success.

I have a M4A87TD EVO MB with two Seagate drives in Raid 0. (The raid controller is a SB850 on that MB) I use the raid utility to create the raid drive that Windows7x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.

I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.

My problem, like so many others', is that when I load into Ubuntu, gparted detects two separate hard drives instead of the raid. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Manager. Gparted still reported two drives. I opened a terminal and ran a few commands with kpartx, and received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)

Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.

I am looking for any suggestions on a different method, or perhaps for someone to tell me that the raid controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates Raid 0 with the alternate CD, but it wasn't a dual boot and didn't use a raid controller from a MB.

View 6 Replies View Related

Ubuntu Installation :: Dual Boot SSD Non Raid - 1 Terabyte Raid 1 Storage "No Block Device Found"?

Sep 15, 2010

It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 terabyte WD HDDs, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the raid VD just fine.

I installed Ubuntu 10.04 by first booting into try mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the raid components, and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the raid array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the raid array, but shows the two component disks of the raid array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)

I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]... To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots, but does not recognize the raid array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
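For reference, the local-top approach mentioned above boils down to a small initramfs hook that activates the dmraid sets before the root filesystem is mounted. A sketch of such a script, following the standard initramfs-tools layout (install as /etc/initramfs-tools/scripts/local-top/dmraid, make it executable, then run update-initramfs -u):

```shell
#!/bin/sh
# initramfs local-top hook: assemble fakeraid sets before root mounts
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay    # activate all dmraid arrays
```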

View 4 Replies View Related

Debian :: /boot On RAID 1 With Squeeze Can Not Boot From 2nd Disc

Jan 21, 2011

For several days I have been stuck on a problem, despite various searches on the net. I just installed a machine with Squeeze RC1.

My Partitioning
Disc 1
/boot RAID 1
swap RAID 1 and encrypted
/ RAID 1 and encrypted

[Code]....

View 5 Replies View Related

Debian Configuration :: RAID - Md0p1 Won't Automount At Boot

Mar 24, 2011

If you want, skip straight to the 'QUESTION' at the end of my post & refer to the 'EXPLANATION' later. EXPLANATION: Using Debian 6.01 Squeeze 64-bit. Just put together a brand new 3.3GHz 6-core AMD. I had a nightmare with my Highpoint 640 raid controller, apparently because Debian Squeeze now handles raid through sysfs rather than /proc/scsi. The solution to this, of course, is to recompile the kernel with the appropriate module for /proc/scsi support. So I thought "screw that", yanked out the raid card, and went with Debian's software raid. This allowed me to basically complete my mission. The raid is totally up and running, except for one final step... I can't get the raid to automount at boot.

My hardware setup;
- Debian is running totally on a 64GB SSD. (sda)
- I have 3x 2TB hard drives used for storage in a raid 1 array (sdc,sdd,sde)

[Code]....

View 2 Replies View Related

General :: Debian Software RAID 1- Boot From Both Disk

Mar 15, 2011

I newly installed Debian Squeeze with software raid. The way I did it was as given in this thread.

- I have 2 HDDs with 500 GB each. On each of them, I created 3 partitions (/boot, / and swap)
- I selected the hard drive and created a new partition table
- I created a new partition of 1GB, specified to use it as a physical volume for RAID, used it for /boot, and enabled the bootable flag
- Created another partition, of 480 GB, specified to use it as a physical volume for RAID, and used it for /
- Created another partition and used it for swap

Then RAID configuration:
Through Configure RAID menu -> create MD device ->
(2 for the number of drives, 0 for spare devices)
Next select the partitions you want to be members of /dev/MD0. I selected /dev/sda1 and /dev/sdb1 (for /boot)
Next select the partitions you want to be members of /dev/MD1. I selected /dev/sda6 and /dev/sdb6 (for /)
And no RAID for swap partitions

'Finish Partitioning and write changes to disk' --> Finish the rest of the install like normal. Everything is ok now, except I am not sure how to test my raid config. When I pull the power on one of the HDDs, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other. In Debian Squeeze, there is no grub command, so I am not sure how to make my software raid bootable from both disks. I configured the /boot partitions of both disks as bootable; not sure whether that is ok.
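The usual answer here is that GRUB 2 only went onto the MBR of one disk; in Squeeze you can put it on both members with grub-install (or dpkg-reconfigure grub-pc, which lets you tick both disks). A sketch, with the two-disk device names assumed:

```shell
# install the GRUB 2 boot code to the MBR of both RAID 1 members, so
# the machine can still boot if either disk dies
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```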

View 2 Replies View Related

Debian :: Raid - No Booting / Reboot The System Does Not Boot?

Nov 5, 2010

There seems to be a problem with RAID on Debian. I got a new Fujitsu Primergy TS 100 S1 server with hardware RAID (and 2 disks), and installed everything nicely over the net, including GRUB - but when it comes to reboot, the system does not boot.

Is there anybody here who knows about the problem?

View 1 Replies View Related

Ubuntu Installation :: 9.10 64, GRUB Can't Boot XP In RAID-1

Jan 12, 2010

I'm a long-time Windows user and IT tech, but I have long felt that my geek levels were too low, so I installed Ubuntu last week (9.10 x64). Hopefully I can make it my primary OS. I have two 80GB drives in RAID-1 from my nForce raid controller (nForce 570 chipset), then a 320 GB drive where I placed Ubuntu, which is also where grub placed itself, and also a 1TB drive.

When grub tries to boot XP I get the error message: "error: invalid signature" I checked the forum as much as I could and tried a few things, but no change.

Drives sdc and sdd are the two drives in raid, they are matched exactly, but detected as different here. I really think they should be seen as one drive.

How can I make grub work as it should?

Also, if/when I need to make changes to grub, do I really have to use the live CD?

Code:
============================= Boot Info Summary: ==============================
=> Grub 1.97 is installed in the MBR of /dev/sda and looks on the same drive
in partition #1 for /boot/grub.
=> Windows is installed in the MBR of /dev/sdb
=> Windows is installed in the MBR of /dev/sdc

[Code].....

View 2 Replies View Related

Ubuntu Installation :: RAID Array Will Not Boot (x64)?

Sep 15, 2010

I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install grub, as I know it is still currently broken, and the install goes flawlessly. However, on first boot using my live grub CD, unless I tell my computer to point to the CD it will hang (it is told to boot from CD first, so I'm not sure why it does). When I tell it to boot to Linux, it will not boot, saying the kernel is missing files (too many to list, sadly, none of which I understand), then offers me a terminal to input "help" into for a list of Linux commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.

View 6 Replies View Related

Ubuntu Installation :: Dual Boot With RAID 0?

Apr 17, 2011

I have (3) 60GB SSD. I want to use (2) in RAID 0 for Ubuntu and (1) alone for Windows 7.

What is the best way to get about this? I know how to create the RAID 0 array with the (2) disks and install Ubuntu on it. But how do I get the RAID array with Linux on it to dual boot with Windows 7?

Linux - RAID 0 (2) SSDs
Windows 7 - (1) SSD

View 3 Replies View Related

Ubuntu Installation :: 11.04 On Sata Raid Won't Boot?

Aug 20, 2011

I have an Asus P5Q-E with SATA Raid (fake Raid?) with 2 250Gb disks in Raid 0, and have Windows 7 installed. I freed some space and want to install Ubuntu 11.04 alongside Win7. I installed choosing the "alongside" option. Installed... reboot... grub rescue (error: no such device). Win7 disk recovery got me access to Win7 again. Where is Ubuntu? Nowhere... the free space is still free (not partitioned) and I found no evidence of an Ubuntu install. Ok... second try with the "something else" option: created /boot, swap and / partitions and installed grub on the /boot partition (this time I want to use the Windows boot loader to choose... and if the installation fails again, at least I still have Windows). Installed... reboot... grub rescue.

Again, the free space wasn't partitioned; the installer spent some time doing something, but there is no evidence of an Ubuntu install. Also, grub seems to be installed on the first disk (even though I chose /boot as the boot loader installation device). I've used EasyBCD to create an entry in the Windows boot loader, and when I choose it I get grub> (not grub rescue>).

Can Ubuntu be installed under such conditions?

View 1 Replies View Related

Fedora Installation :: F12 Won't Boot On LVM/RAID System After Upgrade

Dec 9, 2009

I upgraded from F10 to F12 using preupgrade. The upgrade itself completed with no errors, but I'm unable to boot afterward.

Symptoms: Grub starts, the initramfs loads, and the system begins to boot. After a few seconds I get error messages for buffer I/O errors on blocks 0-3 of certain dm devices (usually dm0 and dm2, but I can't get a shell to figure out what those are). An error appears from device-mapper that it couldn't read an LVM snapshot's metadata. I get the message to press "I" for interactive startup. UDEV loads and the system tries to mount all filesystems. Errors appear stating that it couldn't mount various LVM partitions. Startup fails due to the mount failure, the system reboots, and the steps repeat.

Troubleshooting done: I have tried to run preupgrade again (the entry is still in my grub.conf file). The upgrade environment boots, but it fails to find the LVM devices and asks me to name my machine just like for a fresh install. I also tried booting from the full install DVD, but I get the same effect. Suspecting that the XFS drivers weren't being included, I have run dracut to create a new initramfs, making sure the XFS module was included. I have loaded the preupgrade environment and stopped at the initial GUI splash screen to get to a shell prompt. From there I can successfully assemble the raid arrays, activate the volume group, and mount all volumes -- all my data is still intact (yay!). I've run lvdisplay to check the LVM volumes, and most (all?) appear to have different UUIDs than what was in /etc/fstab before the upgrade -- I'm not sure if preupgrade or a new LVM package somehow changed the UUIDs. I have modified my root partition's /etc/fstab to try calling the LVM volumes by name instead of UUID, but the problem persists (I also made sure to update the initramfs as well). From the device-mapper and I/O errors above, I suspect that either RAID or LVM isn't starting up properly, especially since prior OS upgrades had problems recognizing RAID/LVM combinations (it happened so regularly that I wrote a script so I could do a mkinitrd with the proper options running under SystemRescueCD with each upgrade).

I have tried booting with combinations of the rootfstype, rdinfo, rdshell, and rdinitdebug parameters, but the error happens so early in the startup process that the messages quickly scroll by and I just end up rebooting.

System details: 4 1-TB drives set up as two RAID 1 pairs. FAT32 /boot partition RAIDed on the first drive pair. Two LVM partitions -- one RAIDed on the second drive pair and one on the remainder of the first drive pair. Root and other filesystems are in LVM; most (including /) are formatted as XFS.

I've made some progress in diagnosing the issue. The failure is happening because the third RAID array (md2) isn't being assembled at startup. That array contains the second physical volume in the LVM volume group, so if it doesn't start then several mount points can't be found.

The RAID array is listed in my /etc/mdadm.conf file and identified by its UUID but the Fedora 12 installer won't detect it by default. Booting the DVD in rescue mode does allow the filesystems to be detected and mounted, but the RAID device is set to be /dev/md127 instead of /dev/md2.

The arrays are on an MSI P35 motherboard (Intel ICH9R SATA chipset) but I'm using LInux software RAID. The motherboard is configured for AHCI only. This all worked correctly in Fedora 10.

View 3 Replies View Related

Ubuntu Installation :: Dual Boot On Intel Raid 0?

Mar 4, 2010

I set up an array of 2 HDDs in raid0 using the Intel ICH7R, and I have Win7 installed on it. I'd like to have a dual boot with Karmic, too. I googled a little bit and found that it's not as easy as I thought. Is there anyone who's experiencing my same situation?

View 1 Replies View Related

Ubuntu Installation :: Possible To Have A Dual Boot System And Have It In RAID 0 ?

Mar 21, 2010

Is it humanly possible to have a dual boot system and have it in RAID 0?

I have three hard drives. The three drives that I have are:
250GB IDE
1TB Sata
1TB Sata

And I edit . produce a lot of music/videos

What I am looking to do is have my 250GB IDE drive as my operating system drive (partitioned in two or something; one for Windows, another for Ubuntu)

And I would like, when I am on either operating system (Ubuntu/Windows 7),
for it to see my computer as having a 2TB HDD (setting the two 1TB HDDs in RAID 0)

So in Laymans... 250GB for OS's and 2TB (2 x 1TB HDD) as DATA drives

View 1 Replies View Related

Ubuntu Installation :: Windows 7 Dual Boot With Raid

Mar 24, 2010

Currently I have Windows 7 x64 installed on a pair of raided SSDs using an x58 motherboard (ICH10R), and I want to dual boot Ubuntu x64 (either 9.10 or 10.04) on another hard drive (not part of the raided SSDs). Does anyone know if this can be done? I haven't found anything out there about this. I have tried to install Ubuntu from the CD. It gets to the install screen booting from the CD, but it doesn't let me install or try Ubuntu. I hit enter and nothing happens. I can look through all of the options but can't install or boot into Ubuntu.

View 1 Replies View Related

Ubuntu Installation :: Dual Boot Win7 And 10.0.4 With RAID 0

Jul 31, 2010

I was trying to dual boot Ubuntu on my desktop, which has Windows 7 already installed on it with RAID 0. I have 2 500GB hard drives, and when I boot from CD and try to install Ubuntu, it only detects one of my hard drives and says I only have 500GB and there is no OS installed on this machine! I looked online but couldn't find any solution!

View 9 Replies View Related

Ubuntu Installation :: Desperate: Upgrade From 10.4 64-bit To 10.10 - Won't Boot From Raid

Dec 2, 2010

I upgraded my Dell T3500 from lucid64 to maverick64. The system cannot boot the current image because it gives up waiting for the root device. It gives something like:

Code: ---udevd-work[175]: '/sbin/modprobe -bv pci: v00008086d00002822sv00001028s00000293bc01sc04i00' unexpected exit with status 0x0009 ALERT! /dev/mapper/isw_bbfcjjefaj_ARRAY01 does not exist. Dropping to a shell! at which point it drops to busybox at the (initramfs) prompt. I cannot boot this kernel in either normal or recovery mode (Ubuntu 10.10, kernel 2.6.35-23-generic).

however, if I select the previous kernel in grub, (2.6.32-26) it will boot fine. This is my main production machine and it hosts my Project Management software for my team.

View 9 Replies View Related

Ubuntu Installation :: Grub Multi Boot With Raid 0

Mar 20, 2011

There have been many postings on doing Raid 0 setups, and it seems the best way looks like softRaid, but there were some arguments for fakeRaid in dual boot situations. I've seen some posts on dual boot windows/linux in Raid 0, but I was hoping to do a multi-boot using a grub partition, with several Linux distros and Windows 7. There will also be a storage disk for data, but not in the array. From what I gather, I'll need a grub partition which can only reside on one of the two disks, one swap partition on each disk, then the rest I can stripe.

I've got two 73GB WD raptor drives to use for the OS's and programs. I'm just getting my feet wet with the terminal in linux (Ubuntu makes it way too easy to stay in GUI), and the inner workings of the OS, so I have several questions:

Is this going to be worth the effort? Obviously I'm trying to boost performance in boot and run times, but with Grub on a single drive, will I see much gain?

Does this sound like the right methodology (softRAID)? I only have two spare PCI slots, which don't seem like they would be conducive to hardware raid, but someone who knows more could convince me otherwise.

[Code]...

View 5 Replies View Related

Ubuntu Installation :: 10.04->11.04:often Get Degraded Raid Error At Boot?

Sep 1, 2011

I recently upgraded from 10.04 to 11.04 and I now often get boot messages about a degraded raid.

I'm fairly experienced, but I'm confused about which raid it is talking about. I have a raid5 array, but I don't boot off that, and it seems fine when I finally get it to boot. Previously, I didn't have any other raid arrays[1], but now I seem to have two others, called md126 and md127, and they both seem to be degraded. Where did they come from?

[1] I *do* have two 80GB drives that I was booting from in RAID1, but that was a long time ago, and I have since only booted from one of them. The partition table indeed shows partitions 1 and 5 are raid autodetect, and /proc/mdstat shows they are degraded ([U_]). Could it be that this is causing the problem? If so, why has this only started to happen since the upgrade from 10.04 to 11.04? Anyway, perhaps it is a good idea to add that second disk back into the raid1 array. If so, how do I do that? Note that I've also noticed that when I boot and get to the screen where I select from the different kernel versions, I now get a couple of really old ones too - my thought is that these are from the raid1 disk that I stopped using. If I add it to the array, how can I be sure it will mirror in the correct direction?

It could be that I have fairly recently plugged in that second RAID1 disk, after a long time of not having enough spare sata sockets (I switched my RAID5 array from 8 disks to only 3 disks, so suddenly had a lot more spare sockets).
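A quick way to see which arrays are actually degraded is the status field in /proc/mdstat: '[UU]' means all members present, while an underscore marks a missing one. A minimal sketch of the check, run against a sample status string rather than the live file:

```shell
# sample status from a two-member array with one missing disk; on a
# real system this comes from /proc/mdstat
status="[2/1] [U_]"
case "$status" in
  *_*) echo degraded ;;
  *)   echo healthy ;;
esac
# -> degraded
```

mdadm --detail /dev/md126 gives the authoritative state, including which member is missing.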

View 9 Replies View Related

Fedora Installation :: Boot Failed: Wrong # Of Devices In RAID Set

Apr 13, 2010

I installed Fedora 12 and performed the normal updates. Now I can't reboot and get the following console error message.

ERROR: via: wrong # of devices in RAID set "via_cbcff jdief" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "via_cbcff jdief"
ERROR: no RAID set found
No root device found
Boot has failed, sleeping forever.

View 14 Replies View Related

Ubuntu Installation :: Lucid Server - Won't Boot From Software RAID

May 6, 2010

I'm trying to install Lucid server (64 bit) and I'm having trouble getting it to boot from software RAID. The hardware is an old Gateway E-4610D (1.86GHz Core2 Duo, 2GB RAM, 2 500GB HDs).

If I install on a single HD, it works fine, but if I set up RAID1 as described here, the install completes fine, but on reboot this is what I get:

Code:
mount: mounting /dev/disk/by-uuid/***uuid snipped*** on /root failed: Invalid argument
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory

[Code].....

When I set up the RAID using the ubuntu installer, I created identical main and swap partitions on each of the two drives (all four are set as primary), made sure the bootable flag was on on the two main partitions, then created a RAID1 out of the two main partitions, and another RAID1 out of the two swap partitions.

View 9 Replies View Related

Ubuntu Installation :: Boot Disk Image For Install On AMD RAID PC?

Sep 3, 2010

I tried installing Ubuntu 10.04 WS on my PC, but it did not see any disks to install on. I believe this is because my drives are all configured as RAID. My mobo is an Asus M3A78-EMH HDMI, AM2+ socket, with an Athlon X2 5000+ CPU. The chipset is AMD 780G. I have the BIOS configured for RAID drives, and I already run Win XP x32 and Win 7 x64 on it. My boot drive is configured as 'RAID READY' and I have 2 RAID 1 disks consisting of pairs of SATA drives.

From what I have researched it seems that with some tuning it should be possible to install Ubuntu 10.04 but I have little Linux experience and don't want to mess up my existing drives. I have installed Linux before a few times and run it but never with RAID. Is anyone aware of an existing disk image that I will be able to install from on my system or would it be possible for someone to create one for me to use?

View 4 Replies View Related

Ubuntu Installation :: Dual Boot 10.10 With WinXP - RAID Got Corrupted

Oct 21, 2010

I originally wanted to dual install 10.10 64-bit Ubuntu with Windows 7 64-bit on a RAID 1 HP PC, but the RAID got corrupted and I could find no fix online except using unsupported fakeraid software. The whole thing blew my mind. I gave up after 4 or so reinstalls of Windows 7 to fix the RAID. So I decided to install a dual boot of 10.10 Ubuntu i86 with my old WinXP i86. I kept getting these errors at boot up: unknown filesystem, file not found, out of disk, no such partition, and so on. I tried just about every grub reinstall I could find, including the article here: "Recovering Ubuntu after Installing Windows."

After a number of tries my friend installed Ubuntu successfully, but then when I installed Windows XP I got stuck here again. I did the reload-grub tutorials again as well. I may just give up on WinXP and have my friend reinstall Ubuntu for me.

[Code]...

View 9 Replies View Related

Ubuntu Installation :: 10.10 Software Raid Not Auto Assembling On Boot / Fix It?

Mar 5, 2011

I have a software raid from an 8.10 install. Ubuntu 10.10 does not auto assemble it on boot up; how do I fix this?

I didn't see a way to do it looking all over the web, but when running

mdadm --assemble --scan

from the terminal, the raid starts and works fine.

My mdadm.conf is in /etc/mdadm and is as follows: [Code]...

The partitions are indeed marked as linux raid autodetect
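What usually fixes assembly at boot on Debian/Ubuntu is making sure the ARRAY lines in mdadm.conf describe the array, then rebuilding the initramfs, since early boot reads the copy embedded there. A hedged sketch:

```shell
# append current ARRAY definitions (back up mdadm.conf first), then
# refresh the initramfs so early boot sees the same configuration
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```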

View 1 Replies View Related

Ubuntu Installation :: Blank Screen On Boot - GRUB With RAID

Mar 19, 2011

I am trying to install Ubuntu Server 10.10 on a computer with 5x 1.5 TB HDDs. I went through the process of partitioning the five hard drives into three partitions each:

sd*0 is a 300MB partition for /boot, RAID1, 2 active, 3 spare
sd*1 is a 500MB partition for swap, RAID1, 2 active, 3 spare
sd*2 is a 1.5TB partition for /, RAID5+LVM, 5 active, 0 spare.
md0 is the raid1 on sd*0
md1 is the raid1 on sd*1
md2 is the raid5 on sd*2

During the install, everything seemed to work fine with the formatter, but the installation ended in error. I booted into rescue mode and found that though all the drives were U (in /proc/mdstat), it was resyncing. I let this run (overnight) and the next day, jumped back in and installed the OS successfully.

However, after installing GRUB, when the installation process asked me to reboot, the system came back up with a blank screen (blank, save for a blinking cursor) and didn't move from there. I am thinking that the problem is GRUB, since I can boot into the main LVM partition via the rescue option on the install CD. Here's what bugs me:

[Code]....

View 9 Replies View Related

Ubuntu Installation :: Wish To Install As Dual Boot On A Nvidia RAID

Jun 2, 2011

I am currently with Wubi 10.04 under Vista, and my Dell XPS 630i has a 1 TB Nvidia RAID controller. The first image (Option A) suggests /dev/sda as the device for boot loader installation, while the second image (Option B) suggests /dev/mapper/nvidia_bcidhdja. I think that the way to keep the RAID would be using Option B as the device for boot loader installation. Would Option A break the RAID instead?

View 1 Replies View Related







Copyrights 2005-15 www.BigResource.com, All rights reserved