Ubuntu Installation :: Grub Multi Boot With Raid 0
Mar 20, 2011
There have been many postings on RAID 0 setups, and the best way looks like soft RAID, though there were some arguments for fakeRAID in dual-boot situations. I've seen some posts on dual-booting Windows/Linux on RAID 0, but I was hoping to do a multi-boot using a GRUB partition, with several Linux distros and Windows 7. There will also be a storage disk for data, but not in the array. From what I gather, I'll need a GRUB partition (which can only reside on one of the two disks), one swap partition on each disk, and the rest I can stripe.
I've got two 73GB WD Raptor drives to use for the OSes and programs. I'm just getting my feet wet with the terminal in Linux (Ubuntu makes it way too easy to stay in the GUI) and with the inner workings of the OS, so I have several questions:
Is this going to be worth the effort? Obviously I'm trying to boost performance in boot and run times, but with Grub on a single drive, will I see much gain?
Does this sound like the right methodology (soft RAID)? I only have two spare PCI slots, which don't seem like they would be conducive to hardware RAID, but someone who knows more could convince me otherwise.
I just netinstalled Squeeze to a netbook alongside Windows 7. The installation went well without any problems, and Linux is working OK. But when I boot now, GRUB does not show Windows 7. I took the default settings during installation; I mean, I did not do anything special. What should I do to fix it? Should I run os-prober and update-grub?
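For reference, the usual sequence for getting GRUB 2 to pick up an existing Windows install looks roughly like this — a sketch only; the package name is for Debian Squeeze, and whether os-prober actually lists the Windows partition depends on the machine:

```
$ sudo apt-get install os-prober   # if it is not already installed
$ sudo os-prober                   # should list the Windows 7 partition
$ sudo update-grub                 # regenerates grub.cfg, adding detected OSes
```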
I have a single HDD on which I do not require a Windows OS, just (multiple) Linux; it is just a dev mule, exploratory... Have read the saikee methods, and much more... almost there. Initial installs were with Mint Linux 4; just used ML6.
partitioned with Parted Magic; partition table:
Code:
Device     Boot  Start  End    Blocks     Id  System
/dev/hda1  *     1      64     514048+    6   FAT16
/dev/hda2        65     2614   20482875   83  Linux
I've got a machine that had 9.10 on it and that I've now upgraded to Lucid Lynx - and I'm having the same problem with dual boot (or lack thereof) that I was having previously.
Rough scenario is:
(Original Vista machine had)
C: Windows Vista OS + Windows software, etc.: 500GB - single NTFS partition - SATA drive
D: General dumping ground for data. 500GB SATA drive. Was single NTFS partition, now shrunk to install Ubuntu.
So is now: - NTFS partition (containing general rubbish) - Ubuntu / partition - Ubuntu swap partition
... and then 3 x 1TB SATA drives making up an (Intel ICH9R) FakeRaid RAID5 array - that Windows can happily 'see' and use, but I don't care about Ubuntu having access to it or even seeing it.
Lucid Lynx is installed to /dev/sde6 (IIRC) - but when I boot the machine just boots straight into Vista.
I've done what I can to try and get GRUB correctly installed - to the point that right now I probably have it splattered just about anywhere and everywhere.
So - now - the machine boots and simply presents me with "GRUB Hard Disk Error" and stops...
I can fix this by running the Vista repair, with a fixmbr etc. and putting the MBR back to 'normal' on the first boot disk (/dev/sdd in this case). The machine then just boots straight into Vista.
...or I can boot into Ubuntu (or Vista) by booting off a Super Grub Disk (CD) and selecting "Boot Linux" (or whatever it is) - and it correctly boots Lucid Lynx from /dev/sde6
Ideally I want a proper GRUB dual boot menu - but I just seem to be getting into more and more of a mess!
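For what it's worth, the standard live-CD route for putting GRUB 2 back on the boot disk is something like the following — a sketch only, assuming the post's device names (/dev/sde6 for the Ubuntu root, /dev/sdd for the BIOS boot disk):

```
$ sudo mount /dev/sde6 /mnt
$ sudo mount --bind /dev  /mnt/dev
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys  /mnt/sys
$ sudo chroot /mnt
# grub-install /dev/sdd    # install to the MBR of the BIOS boot disk, not a partition
# update-grub              # rebuild grub.cfg; os-prober should add the Vista entry
```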
I wrote a GRUB multi-boot configuration so I can boot multiple distributions and have storage space on one 32GB flash drive.
set imgdevpath="/dev/disk/by-label/multiboot"
Code:
menuentry 'Debian Jessie amd64' {
    set isofile='/iso/debian-8.0.0-amd64-DVD-1.iso'
    loopback loop $isofile
    linux (loop)/install.amd/vmlinuz
    initrd (loop)/install.amd/initrd.gz
}
This works in virt-manager when I boot the physical USB device as a virtual disk on a USB bus - it works flawlessly - but when I plug it into a physical machine, the CD-ROM detection step fails to mount /dev/sdb1 as fstype=iso9660.
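One common refinement from typical multiboot-USB setups (an assumption on my part, not something from the post) is to set the root explicitly from the filesystem label before the loopback, so the entry does not depend on which disk GRUB enumerates first on real hardware:

```
menuentry 'Debian Jessie amd64' {
    # locate the flash drive by its label rather than by device order
    search --no-floppy --set=root --label multiboot
    set isofile='/iso/debian-8.0.0-amd64-DVD-1.iso'
    loopback loop $isofile
    linux (loop)/install.amd/vmlinuz
    initrd (loop)/install.amd/initrd.gz
}
```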
I'm a long-time Windows user and IT tech, but I have long felt that my geek levels were too low, so I installed Ubuntu last week (9.10 x64). Hopefully I can make it my primary OS. I have two 80GB drives in RAID-1 from my nForce raid controller (nForce 570 chipset), then a 320GB drive where I placed Ubuntu - it's also where GRUB placed itself - and also a 1TB drive.
When grub tries to boot XP I get the error message: "error: invalid signature" I checked the forum as much as I could and tried a few things, but no change.
Drives sdc and sdd are the two drives in raid, they are matched exactly, but detected as different here. I really think they should be seen as one drive.
How can I make grub work as it should?
Also, if/when I need to make changes to grub, do I really have to use the live CD?
Code:
============================= Boot Info Summary: ==============================
 => Grub 1.97 is installed in the MBR of /dev/sda and looks on the same drive
    in partition #1 for /boot/grub.
 => Windows is installed in the MBR of /dev/sdb
 => Windows is installed in the MBR of /dev/sdc
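Since XP's loader generally expects to be on the first BIOS disk, a custom GRUB 2 entry along these lines (placed in /etc/grub.d/40_custom) is a common workaround when Windows lives on a second drive — a sketch only; the disk and partition numbers are assumptions:

```
menuentry "Windows XP" {
    insmod ntfs
    set root=(hd1,1)              # assumed: XP's boot partition on the second disk
    drivemap -s (hd0) (hd1)       # swap disks so XP believes it is on the first drive
    chainloader +1
}
```

After adding the entry, running update-grub regenerates grub.cfg with it included.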
I am trying to install Ubuntu Server 10.10 on a computer with 5x 1.5 TB HDDs. I went through the process of partitioning the five hard drives into three partitions each:
sd*0 is a 300MB partition for /boot, RAID1, 2 active, 3 spare
sd*1 is a 500MB partition for swap, RAID1, 2 active, 3 spare
sd*2 is a 1.5TB partition for /, RAID5+LVM, 5 active, 0 spare
md0 is the RAID1 on sd*0
md1 is the RAID1 on sd*1
md2 is the RAID5 on sd*2
During the install, everything seemed to work fine with the formatter, but the installation ended in error. I booted into rescue mode and found that though all the drives were U (in /proc/mdstat), it was resyncing. I let this run (overnight) and the next day, jumped back in and installed the OS successfully.
However, after installing GRUB, when the installation process asked me to reboot, the system came back up with a blank screen (blank, save for a blinking cursor) and didn't move from there. I am thinking that the problem is GRUB, since I can boot into the main LVM partition via the rescue option on the install cd. Here's what bugs me:
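(As an aside: since /boot here is on a RAID1, one common fix for a blank-screen boot is to put GRUB's MBR code on every member disk, so the BIOS can boot from whichever drive it tries first. A sketch from rescue mode — the drive names are assumptions matching a five-disk layout:)

```
$ for d in sda sdb sdc sdd sde; do sudo grub-install /dev/$d; done
$ sudo update-grub
```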
I'm having serious trouble installing ubuntu-10.04.1. My raid is a hardware RAID with an Intel chipset. Note that Win7 is already installed and working with my raid. I made some space from Windows to install Ubuntu (40GB). First, I ran the installer and everything seemed to be fine. I chose to install Ubuntu where there is the most free space (sorry, I'm not sure about the real terms used there).
Then my partition with the Vista loader appears. So the installer can see my raid and should work fine (everything is detected correctly). But near the end of the installation (around 95%), a pop-up appears and tells me that GRUB can't install to /dev/sda and it's a fatal error. I can choose another destination, but that doesn't seem to work.
I had ubu 9.04 and Vista installed on an 80GB drive, and I had a spare 80GB drive as well. I set up a RAID0 config in my BIOS, then installed ubu 9.10 onto it. All was fine until the very end, when it said GRUB failed to install.
So I rebooted, and I'm left with a blinking cursor. How do I install GRUB? I've installed ubu a few times now and never had an issue, so now I'm lost.
I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.
Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".
Output from mdadm:
Code:
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Nov  7 17:06:02 2012
     Raid Level : raid6
     Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
  Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
   Raid Devices : 14
[Code] ....
Output from blkid:
Code:
# blkid
/dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
/dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
/dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"
[Code] ....
The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
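As an aside, the string GRUB is searching for appears to be the mdadm *array* UUID with the dashes stripped, not the XFS filesystem UUID on /dev/md0 — removing the dashes from the linux_raid_member UUID in the blkid output reproduces it exactly:

```shell
# strip the dashes from the raid-member UUID reported by blkid
echo "b1c40379-914e-5d18-dddb-893b4dc5a28f" | tr -d '-'
# prints b1c40379914e5d18dddb893b4dc5a28f
```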
Here is the output from `bootinfoscript` 0.61. It contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....
During grub rescue, an `ls` shows the member disks and also shows `(md/0)`, but if I try `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in unknown filesystem. The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.
I have come close to losing my mind over this. I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, and `grub-install` numerous times to all member disks without error, etc.
I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.
[URL] ....
One thing I noticed is that it is only showing half the disks. I am not sure whether this matters, but one theory is that it is because there are two LSI cards physically in the machine.
This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.
I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.
I've recently had trouble reinstalling my Ubuntu system as I was getting various unusual errors as described in my old thread here. I thought it was probably something to do with my RAID-0 array which was pre-installed on my laptop from purchase being corrupted or something like that (if it's possible). I decided to simplify things for myself (not understanding RAID arrays much) so I just removed the RAID array and installed Windows and Ubuntu on the now separate hard disks. It worked fine.
I noticed quite a significant performance drop, however, with even Ubuntu boots taking longer than 30 seconds despite my laptop being both high-spec and only a few months old. Windows, as you can imagine, was dreadfully slow. I wasn't entirely convinced that this was all due to the loss of the RAID array - even low-spec laptops with presumably no RAID arrays are supposed to boot Ubuntu in under 30 seconds, apparently - but I read that RAID-0 arra
Problem: I have installed two Ubuntu servers, 10.04 32-bit and 10.10 64-bit, in a multi-boot environment (also have FDOS and WinXPsp3). The 64-bit will not boot because grub can't find the UUID for the disk with the 64-bit system.
Brief Background: Installed 10.04 LTS two months ago with no problems. 10.04 is in a primary partition on hda with FDOS.
Installed 10.10 (64-bit) in a new primary partition on the same hd. The install seemed to go ok, but the MBR and the fs on the 10.04 were corrupted; could not boot. Restored drive, and rebuilt grub.
Installed 10.10 on a separate hd (hdb). In the grub step all OSes were recognized, so I pointed grub at hda. Grub failed to boot.
Rebuilt grub from 10.04 on hda. All systems recognized but 10.10 will not boot because it says it cannot locate the UUID specified.
Compared the grub.cfg for both systems, the UUID specified for hdb is the same. Also, when I mount the drive for 10.10 on the 10.04 system the drive UUID is consistent.
I know I must be missing some thing, but I know not what. Have searched and can't find any clues. All other OS's boot ok.
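One way to see exactly which UUID the failing entry searches for, and compare it against what the disk actually reports — a sketch only; the grep pattern and device name are assumptions:

```
$ grep -B2 -A2 "10.10" /boot/grub/grub.cfg | grep search   # UUID GRUB will look for
$ sudo blkid /dev/hdb1                                     # UUID the 10.10 root really has
```

If the two strings match, the problem is more likely device enumeration at boot time than the UUID itself.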
Hardware: AMD64 4GB, 2 internal IDE drives (hda and hdb), 1 internal SATA (hdc WinXP), various USB and Firewire Drives (no bootable systems).
I have Lenny and Jaunty Jackalope installed on the same HDD. Jaunty Jackalope was installed second, so it has control of grub (I don't know if that is the correct expression). I want to remove Jaunty Jackalope; however, I know from past experience that after I do this I will no longer be able to boot into Lenny, as I will get a grub error at startup. How do I give boot/grub to Lenny so that I can remove the other operating system?
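Before deleting the Jaunty partition, GRUB has to be reinstalled from Lenny so the MBR points at Lenny's /boot/grub. A rough sketch (Lenny shipped GRUB legacy; the device name is an assumption):

```
$ sudo grub-install /dev/sda   # run from Lenny: write Lenny's GRUB to the MBR
$ sudo update-grub             # regenerate Lenny's menu.lst
```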
There are lots of OSs and Linux dists to install on your netbook, and I want to make it as easy as possible to install, remove and switch between them.
Just installing a dist and then another one after it will replace the GRUB boot screen every time, and some dists might override previous GRUB menus entirely.
On a previous machine I created a GRUB partition which chain-loads GRUB for each dist, but now I can't remember how I did it.
The hard drive is currently empty, since I started playing around with repartitioning. What is the easiest way to install GRUB to a partition? Links are welcome, but please no generic "install GRUB" guides, because the ones I've found haven't been relevant to my particular situation (empty hard drive, multi-boot environment, no CD/floppy).
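A minimal sketch of a dedicated GRUB partition on an otherwise empty disk, done from a live USB environment — the small-first-partition layout and the /dev/sdX placeholder are assumptions, and --boot-directory needs a reasonably recent GRUB 2 (older versions used --root-directory):

```
$ sudo mkfs.ext2 /dev/sdX1                            # small partition dedicated to GRUB
$ sudo mount /dev/sdX1 /mnt
$ sudo grub-install --boot-directory=/mnt /dev/sdX    # MBR code + /mnt/grub files
```

Each distro's own GRUB can then be chain-loaded from entries in this partition's grub.cfg, so installing or removing a distro never touches the master menu.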
I have a dual boot computer. I seem to be having a bit of a problem with Grub lately. Every time I do any kind of update in Debian Lenny AMD 64 it borks my Grub. The first time I had to change a few lines in menu.lst (hd1,1 to hd0,1), no problem. The second time, things went downhill fast and G-Parted was giving me errors on my NTFS partitions. I had to do an XP repair, fixboot, and I had to reinstall Grub completely to the MBR.
Now I'm getting a "Grub Loading stage1.5. Grub loading, please wait.... Error 22". All I did was update Debian Linux and shut down. From my initial searches, this is an error relating to not finding the correct partition. I have booted with a G-Parted CD and it shows all my partitions. I do have a Windows XP Home boot CD if I need it. Here is my partition outline if you need it:
Code:
/dev/sda1  NTFS (Windows)  flags: boot
/dev/sda2  ext3 (Linux)
/dev/sda3  FAT32 (shared space between Windows and Linux)
/dev/sda4  extended
/dev/sda5  linux-swap
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this problem I can not solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have a M4A87TD EVO MB with two Seagate drives in Raid 0. (The raid controller is a SB850 on that MB) I use the raid utility to create the raid drive that Windows7x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, gparted detects two separate hard drives instead of the raid. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic. Gparted still reported two drives. I opened a terminal and ran a few commands with kpartx, and received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps for someone to tell me that the raid controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past the CD-ROM drive detection since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a raid controller from a motherboard.
I wanted Windows to appear first in my grub2 menu, so I renamed the 30_os-prober file (or whatever its filename is) to 09_os-prober so that it comes before the 10_linux file. The problem is that whenever these files get updated, the updater is unable to find the 30_os-prober file (since I renamed it) and recreates it... leaving me with two versions (09 and 30), with 09 of course being outdated.
The updater also fails to run update-grub and instead attempts to update grub.cfg manually... and fails. I had to run sudo update-grub manually.
Is there any way to fix this so it's all updated automatically while leaving Windows the top choice? No manual intervention required beyond clicking "install updates"?
Also, is it possible to have JUST the Windows and Ubuntu choices - no Ubuntu recovery, memtest, or alternative (older) kernels - in the grub menu?
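Rather than renaming 30_os-prober (which, as described, the package manager resurrects on every update), the supported knobs live in /etc/default/grub — a sketch; the exact menu title string is an assumption and must match what update-grub actually generates:

```
# /etc/default/grub
GRUB_DEFAULT="Windows 7 (loader) (on /dev/sda1)"   # or a numeric menu index
GRUB_DISABLE_RECOVERY="true"                       # drop the recovery entries
```

After editing, run `sudo update-grub`. Memtest entries can usually be hidden with `sudo chmod -x /etc/grub.d/20_memtest86+`, and old kernels disappear once removed via the package manager.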
I have two hard drives: Windows 7 is on one and Ubuntu 9.10 is on the other. Both drives are 320GB, but different models of drives by Seagate. Both drives are detected by the BIOS and by Windows, but only one drive was detected by Ubuntu during the installation process. I literally had to disconnect the Windows drive to get Ubuntu to recognize the drive I wanted it on. Now that Ubuntu is completely installed and a new kernel has been downloaded and installed, it finally recognizes both drives as existing.
There is some kind of problem with the installer and the original kernel that kept it from seeing the second drive. I will have to manually edit GRUB to get it to boot the Windows drive. How do I edit GRUB, and what kind of GRUB command would do the trick? I searched for "multi-boot" and read them all; there was one thread about multi-boot on multiple drives, but it did not fit, because the installer recognized both drives in that thread. Currently I have to change the boot order in the BIOS to boot the drive I want.
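A hand-written entry in /etc/grub.d/40_custom is one way to add the Windows drive until os-prober finds it on its own — a sketch only; the disk/partition numbers are assumptions:

```
menuentry "Windows 7" {
    insmod ntfs
    set root=(hd1,1)   # assumed: Windows on the first partition of the second drive
    chainloader +1
}
```

After saving, `sudo update-grub` adds the entry to the menu.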
I'm trying to install Fedora onto a computer that has Windows XP on the first of two SATA drives. Windows 7 is on the second drive.
I installed Fedora with no problems on a 14 gig free space I created on the first drive and told it where and what my other OSes were. Fine so far. I didn't tell it to overwrite the MBR on the XP (first) drive. I took the second option, which I "think" put the boot loader on the Fedora partition.
All good - till I rebooted and I just saw my Windows 7 loader with my options for XP and Windows 7 but no Fedora.
So, if I overwrite the MBR on the first drive, will that mean I can't access my Windows 7 installation?
I have a DL380 G3 HP server and I need to attach a 1TB hard drive. Of course the server only accepts SCSI, and since I don't have a few thousand pounds to splash about I am looking at alternate methods. What I have done so far is buy a PCI SATA raid card and attach the 1TB drive to it. The BIOS seemed to be playing nice, detecting it and allowing me to select the card. But it won't see the drive.
Funnily enough though when I installed Linux on the SCSI drive during the install it saw the 1TB just fine and I could write to it. This has led me to believe that I could use GRUB (installed on the SCSI) to boot the 1TB as the hardware setup is working but the BIOS just can't handle it. So here I am with no real knowledge of GRUB. I was hoping someone could tell me where to begin with this and if it is even possible
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root LVM as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, then running grub-install gives me errors:
Code:
$ sudo grub-install --root-directory=/mnt/ /dev/md0
grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea.
grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root LVM as 'dev/mapper/vg-root'. This results in the grub-install error:
Code:
$ sudo grub-install --root-directory=/mnt/ /dev/md0
Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD HDDs, which holds all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the raid VD just fine.
I installed Ubuntu 10.04 by first booting into try mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the raid components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there.
Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the raid array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the raid array, but shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the raid array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
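The local-top approach mentioned above boils down to a small hook script that runs dmraid before the root filesystem is needed — a sketch of the idea, not a tested hook:

```
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
# activate fakeRAID sets before the root device is mounted
/sbin/dmraid -ay
```

The script then needs `sudo chmod +x` and a `sudo update-initramfs -u` so it is baked into the initramfs.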
I've created two RAID0 partitions on my drives, a 500GB and a 60GB. I'm trying to install Ubuntu on the smaller partition (I've already put Win 7 on the larger one), and every time, right at the last part of the installation, it says GRUB couldn't be installed: "the grub package failed to install in arget......."
I want to install Ubuntu 11.04 along my current Windows XP installation and I am trying to figure out the following:
1. How to recognize the relationship between Windows disk drive letters and Linux disk device names like /dev/sda?
2. How to configure the multi boot?
3. Where to place the Linux swap file?
4. Which Linux filesystem is the best for general use?
The specifics:
Pair of identical Seagate SATA2 80GB drives partitioned in Windows as follows:
When looking through the Ubuntu installation to be configured, I assume that drive 0 is /dev/sda where I have 3 partitions sda1, sda5, sda3. Drive 1 would be /dev/sdb with partitions sdb1, sdb5, sdb3 and I am not sure which corresponds to the Windows drive letters.
I would like to dedicate the empty drive K: for the Linux installation, swap space & data, and use minimal amount of space of drive C: for the dual boot.
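There is no fixed mapping from Windows drive letters to /dev names, but the sizes and filesystem types usually make the correspondence obvious. A sketch from a live session:

```
$ sudo fdisk -l    # list disks and partitions with sizes and partition types
$ sudo blkid       # show filesystem type and label for each partition
```

Matching the partition sizes against what Windows Disk Management reports for C: and K: identifies which of /dev/sda and /dev/sdb is which.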
I would like to create a multi-boot DVD with multiple distros on it. I just got a Linux mag with 5 distros on it, and all of them boot!! This is by far the coolest thing I've seen. How is this done? Is there a program that will let me select more than one ISO image, create a bootable disk with all the distros I want on it, and create a menu that will boot into the different distros?
I have a 500G disk and want to set it up as a test machine with many partitions.
The first partition is for Windows XP. It works. I use a USB disk to boot Ubuntu, so I can use Ubuntu's dd command to back up my XP partition.
Then I set up more than 10 partitions, and now Ubuntu booted from USB does not always work. It always boots, but when I open a terminal I get funny characters. It seems the problem has something to do with the number or size of the partitions.
Is there a limit for number of partitions or size of partitions?
Can someone point me in the right direction as to how to fix this? I have Mint 10 GNOME on /dev/sda1, then Mint 10 KDE on /dev/sda3, all working great. I have just installed Ubuntu 10.10 onto /dev/sda4. All was good: after the first reboot (when asked to remove the disc) there is a screen that shows all of my boot options (i.e. Ubuntu 10.10, Mint 10 GNOME, Mint 10 KDE). I picked Ubuntu, did a full upgrade including a new kernel, rebooted, and now the screen only shows Ubuntu 10.10. Results of the boot info script below.
So, I recently installed Ubuntu on a (now) Triplebooting RAID 5 system. However, the setup was not able to install GRUB. This means I cannot boot into Ubuntu currently. The following are acceptable outcomes for me:
1) Installing GRUB as the primary bootloader, allowing me to boot into Linux, Windows 7, or Windows XP.
2) Installing GRUB as the primary bootloader, but allowing me to boot into the Windows 7 Bootloader as well as Ubuntu.
3) Installing GRUB as a secondary bootloader that can be accessed through the Windows 7 Bootloader.
My current config, according to gparted with kpartx installed is: