Fedora Installation :: F13 Install From CD On Software RAID Not Recognized
Sep 17, 2010
Currently running F10 on a software RAID 1 array (Linux md, not hardware or BIOS RAID). I booted from the F13 install disc and selected install/upgrade. The problem is that Anaconda never sees the RAID devices.
I tried passing the "nodmraid" argument to the kernel during boot. Other research suggested adding an auto=md option to mdadm.conf. Neither produced results.
Anaconda does not see /dev/md0 or /dev/md1, and consequently it only offers a fresh-install option rather than an upgrade.
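A minimal sketch, assuming the arrays are standard md devices: the auto=md hint usually goes on the ARRAY lines of mdadm.conf rather than on a line of its own, something like
ARRAY /dev/md0 auto=md level=raid1 num-devices=2 uuid=<your uuid>
ARRAY /dev/md1 auto=md level=raid1 num-devices=2 uuid=<your uuid>
From the installer's shell (Ctrl+Alt+F2 in Anaconda) you can also check whether the kernel sees the arrays at all:
cat /proc/mdstat
mdadm --assemble --scan
Note that "nodmraid" only disables BIOS/fakeraid (dmraid) scanning and should not affect md arrays either way.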
I have got a middle-aged server which I upgraded with a simple SATA RAID controller with a VT6421A chipset. I attached two Samsung 750 GB hard disks and, right after the POST screen, created a nice RAID 1 array. Ubuntu recognizes it, as does Windows (which I would never ever use... ;-)). SuSE 11.1 (we need this OS for conformity) simply shows both disks separately in the partitions overview; the "RAID" section remains empty.
Are there any hints out there on how I can enable the whole RAID stuff in openSUSE? Do I need to integrate other drivers/modules to get things working?
I have a system with 1 SSD "boot" drive and 3 HDDs in RAID 5. I created my RAID 5 set in the BIOS (P6X58D motherboard / Intel software RAID); it looks and works fine in Windows 7 Ultimate 64-bit. I installed Ubuntu 10.04 using Wubi; both the Windows 7 and Ubuntu systems are on the SSD. THE PROBLEM: Ubuntu does not appear to recognize my RAID 5 set. When it loads I see a "/dev/sd_ 10 failed" error (sd_ = sda, sdb or sdc) along with "no such file or directory". I can see my 3 HDDs in Disk Utility, but the space/partition information is incorrect. I have reinstalled all packages related to RAID (dmraid, etc.).
I was in the process of installing Fedora 12 when it came to the "Operating Systems List". Here it recognised only Windows and none of the other 4 Linux distros already installed. Looking at the "Add" option and the drop-down list it gives for each partition, can someone tell me what to enter in the LABEL box for these partitions, or how to find out what to enter, so that these distros can be booted?
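A small sketch, not specific to Anaconda: an existing ext2/3/4 filesystem's label can usually be read (or set) with e2label, and blkid lists labels and UUIDs for every partition, which should give you something sensible to type into the LABEL box:
e2label /dev/sda5            # print the label of one partition
blkid                        # list LABEL= and UUID= for all partitions
The /dev/sda5 device here is only a placeholder for whichever partition holds each distro.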
I booted from the Fedora 12 CD. My problem is that the installer does not recognize my IDE disk.
lspci -> IDE interface VT82C586A
fdisk -l:
/dev/sda1 swap, approx 2 GB
/dev/sda2 Linux, 20 GB
/dev/sda3 Linux, 54 GB
lshal -> pata_via
I tested the disk with the Seagate diagnostics and it reports no errors. I used a partition manager to create and format three partitions. Other distros see the disk; I tried CrunchBang and it installed with no fuss. I have googled and looked in the known-issues pages.
I am doing a new install and have 4 drives (2x500GB and 2x2TB). What I want is the OS on the 2x500GB drives and the data on the 2TB drives. The idea is to make the 500s one RAID 1 set and the 2TB drives another RAID 1 set. I think the installer is trying to build the RAID set for the OS, but the root is coming out as RAID 0 rather than RAID 1. Is there some way to specify the level?
I just bought two 320GB SATA drives and would like to install F11 with software RAID 1 on them. I read an article which explains how to install RAID 1, but it used 3 disks: one for the OS and two clones. Do I really need a third disk for a RAID 1 configuration? If 2 disks are enough, should I select "Clone a drive to create a RAID device" during the F11 installation as explained here?
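For what it's worth, a two-disk mirror is the normal case; the third disk in such guides is usually just a hot spare. As a rough sketch (device names are placeholders), the equivalent mdadm command only needs two members:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
In the installer you would normally create a software RAID partition on each disk and then combine them into one RAID 1 device, rather than using the clone option.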
Installing a new Intel server S5000VSA (Sapello) motherboard with RAID 1. I downloaded the F10 32-bit DVD and ran the install. Everything works fine; I select Samba and all that. The install completes, but on reboot I get a blank screen.
I know that with Windows one has to load the RAID driver off the Intel driver CD, so I guess that is the problem. But how do I do this in Fedora? Question: how do I select the Intel RAID driver during the install process? There does not seem to be any place to stop and make this selection. I tried selecting the Red Hat and SUSE installs in the Intel configuration assistant, but it then reboots and that is it. In Windows it would ask to insert the Intel driver CD, but that does not happen. So I am stuck. I loaded Fedora some 3 years ago and like it, but back then it was not on a RAID setup. I created a VMware install on my desktop with the same install options and it works, so the DVD seems fine.
I'm attempting to install F13 on a server that has a 2-disk RAID setup in the BIOS. When I get to the screen where I select which drive to install on, there are no drives listed. The hard drives were completely formatted before starting the F13 installation. Do I need to put something on them before Fedora will install?
Fedora is having trouble identifying a RAID partition; it sees the members as separate drives. I got drivers from Dell, but a Fedora DVD install mentions nothing about a place to load extra drivers.
When it says it must "initialize" the drives, Fedora then breaks the Dell BIOS RAID. How can I either install the Dell drivers or make Fedora see the RAID partition as one device?
I'm having a problem with the installation of Ubuntu Server 10.04 64-bit on my IBM xSeries 346 Type 8840. During the installation the system won't recognize any disk, so it asks which driver it should use for the RAID controller. There's a list of options, but nothing seems to work. I've been searching the IBM website for an appropriate driver, but there is no Ubuntu version (there are Red Hat, SUSE, etc). I was thinking about downloading the correct driver onto a floppy disk to finalize the installation, but apparently there is no 'general' Linux driver to solve the problem here.
Motherboard: Asus Crosshair Formula IV
CPU: AMD Phenom II X6 1090T Black Edition 3.2GHz
RAM: Corsair Vengeance 8GB DDR3 1600MHz CL8 Dual Channel Kit (2 x 4GB)
HDD: 1 SATA 750 GB hard drive
RAID: HighPoint 3120 PCI-E TrueRaid controller, 2x250GB set as mirroring.
So everything was working well. I have Windows on the 750 GB SATA hard drive, and openSUSE 11.3 on the mirrored RAID setup. I have GRUB as the bootloader; it loads Windows and Linux without error. I have 8 GB of RAM installed, but room for another 8 GB. This is where the error starts. After I install the other 8 GB of RAM (Corsair Vengeance as well), Windows will boot, but not Linux. When I remove the newly installed RAM, Linux boots no problem. I tried putting the RAM back in and then reinstalling Linux, but as I go through the setup, Linux doesn't recognize the RAID setup: I can see it there, but I can't use it, and the partitions don't show up. I then removed the install CD and tried to start Linux again; still the error. The error it gives me goes like this:
>Trying manual resume from /dev/disk/by-id/<...>-part2
>resume device /dev/disk/by-id/<...>-part2 not found (ignoring)
>Waiting for device /dev/disk/by-id/<...>-part1 to appear: .............
>Could not find /dev/disk/by-id/<...>-part1
>Want me to fall back to /dev/disk/by-id/<...>-part1? (Y/n)
Whether I select Y or N, it takes me to a shell. In the shell I tried using fdisk to check whether /dev/sda (the Linux setup) has any partitions on it, and no, it does not recognize any of them. When I use fdisk on /dev/sdb to list the partitions, it works. So only the RAID setup is affected by the increase in RAM.
We ran out of space on our server hard drive, so I installed 2 x 1GB drives, set them up as a software RAID 1 array, copied the contents of /home to it, and mounted it as /home for testing. Everything was OK, so I unmounted it, deleted the contents of the /home folders (don't worry, we're backed up), then remounted the array. Everything was fine until we rebooted. Now I can't access the array at all; during booting the error "mount: special device /dev/md1 does not exist" comes up twice, and manually trying to mount it gives the same issue. The relevant line from fstab reads:
/dev/md1 /home ext3 defaults 0 0
However, Webmin shows only md0, the RAID 0 device on which the OS was originally installed. There is no /dev/md1 device file. The mdadm.conf file reads as follows:
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid0 num-devices=2 uuid=76fd4050:fb820568:c9bd3a59:ad3e70b0
So md1 is not listed; I'm assuming this is significant. Am I right, and either way, what can I do?
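A rough sketch, assuming the RAID metadata on the member partitions is intact: assemble the array by hand, then append the ARRAY line mdadm reports to mdadm.conf so it is assembled at boot (the member device names below are placeholders):
mdadm --assemble /dev/md1 /dev/sdb1 /dev/sdc1
mdadm --detail --scan | grep /dev/md1 >> /etc/mdadm.conf
On some distros the initrd also needs regenerating afterwards so that early boot picks up the new config.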
If I have Windows installed on RAID 0, then install VirtualBox and install all my Linux OSes inside VirtualBox, will they effectively be RAID 0 installs without needing to install RAID drivers?
I've just installed 10.04 twice on 2 separate disks and it seems to have done something really strange to them. The BIOS will not recognise them anymore; it just waits but they never come online. I have to unplug the disk completely for the system to boot. The first time I thought I'd just lost a disk, but when exactly the same thing occurred the second time around, it seemed like too much of a coincidence. The installer didn't recognise the disks the first time around as they had previously been part of a RAID group. I did a dmraid -E -r /dev/sda to fix that. After that I just installed as I have every other time.
System: Xubuntu 8.04.4, but equally valid for other Ubuntu variants. I installed an updated driver for an Intel network card by doing a "make install" from its src directory (copied from the install CD). That appears to have successfully created a file named "e1000.ko" and placed it in a subdirectory named "e1000" in the appropriate /lib/modules/.../drivers/net directory. Based on the contents of the Makefile, the "make" also invoked "depmod -a". The "make" doesn't appear to have done anything else interesting.
However, on reboot this new module is not the one being loaded (based on strings in the dmesg log file); instead the original as distributed with Xubuntu is. Obviously I have to do more than the installation instructions on Intel's CD say to do. What steps do I still need to take to get this new driver recognized in place of the original? (It is no huge deal if the only answer I get is an honest "don't bother", since the driver that came with Xubuntu appears to work fine; however, the Intel driver is a full version number higher, so I'd expect it to be somewhat better by some measure.)
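One possibility, offered only as a guess: if the stock e1000 module is being loaded from the initramfs, the new copy on the root filesystem never gets a chance. Rebuilding the initramfs after the make install is usually enough:
sudo depmod -a
sudo update-initramfs -u
Then reboot and compare versions, e.g.:
modinfo e1000 | grep -i version
dmesg | grep -i e1000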
Installing Ubuntu 10.10 Desktop on a HighPoint RocketRAID 2642. The Ubuntu installer does not find the drive. How do I install the drivers so that I can install to, and boot from, the RAID drives after the installation?
My computer has 2 HDDs attached to the 1st and 2nd SATA ports of my mobo; the 1st SATA drive is empty while the 2nd has my Windows Vista on it. I also have a Perc/5i RAID card with 2 RAID arrays defined.
I am going to install Ubuntu 10.04 x64 to the 1st SATA drive (I expect it will be /dev/sda), but when I try to install, I find my drives are recognized as below:
/dev/sda > 1st RAID array of my Perc/5i
/dev/sdb > 2nd RAID array of my Perc/5i
/dev/sdc > 1st SATA drive < I need to install Ubuntu on this drive
/dev/sdd > 2nd SATA drive
I don't want to install 10.04 onto /dev/sdc because I may add more arrays to my RAID card, which, from my experience with 9.04, will probably change the device name of my current /dev/sdc, and the system will then fail to boot.
Is there any way to force my 1st SATA HDD to be /dev/sda during install?
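A possible workaround rather than forcing the ordering: refer to the filesystems by UUID so the /dev/sdX name no longer matters. A rough sketch (the UUID below is a placeholder you would read from blkid):
sudo blkid                      # note the UUID of the Ubuntu root partition
then in /etc/fstab something like:
UUID=0a1b2c3d-...  /  ext4  errors=remount-ro  0  1
Recent Ubuntu installers generally write UUID-based fstab and GRUB entries already, in which case adding arrays later should not break booting even if the kernel device names shift.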
First of all, I'm not a very experienced Linux user. I have installed Fedora 13 on my HP laptop and it has 8 GB installed (confirmed in the BIOS), yet:
grep MemTotal /proc/meminfo
MemTotal: 3088492 kB
That's less than 8 GB. Googling suggests I need PAE support in my kernel. It seems that Fedora 13 should install this if needed; correct me if I'm wrong.
uname -a
Linux mjohanss-pc.kentor.se 2.6.33.6-147.2.4.fc13.i686 #1 SMP Fri Jul 23 17:27:40 UTC 2010 i686 i686 i386 GNU/Linux
If PAE support were installed, I think "PAE" would appear after .fc13.i686 in that output. How can I install and configure PAE support so all my memory is recognized?
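A short sketch of the usual fix on Fedora, assuming a 32-bit install: the PAE kernel is a separate package, so install it, reboot, and pick the PAE entry in GRUB:
su -c 'yum install kernel-PAE'
After rebooting into it, uname -r should end in .PAE and MemTotal should show the full 8 GB (minus whatever the hardware reserves).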
I have an old HP PC with 2 drives: a primary (C = 20GB) and a slave (E = 60GB). I have the Windows XP Pro OS (which I want to completely replace with Ubuntu). Ubuntu 10.10 is installed on E as a side-by-side install (with XP on C). I am done testing Ubuntu and now want to completely replace the XP OS. Ubuntu is installed on the E drive as a partition. ISSUE: when I power on, the PC goes directly to the GRUB menu, but I get no option to boot from the 10.10 live disc during boot-up.
HISTORY: I have tried (unsuccessfully) to remove Ubuntu from my E drive using the uninstall function in the Windows control panel. I have also tried to remove it using the Manage/Disk Management process, but the "Format" and "Delete" options are unavailable (grayed out), so I cannot use that. I would like to do a complete clean-up and a fresh install of Ubuntu as my only OS. I have read and tried a number of internet articles/recommendations about opening the BIOS and redirecting the start-up to the CD, but I do not get any option at any time during boot to do that.
QUESTIONS: 1) How can I get my HP PC to boot from (recognize) the Ubuntu Live Disk (CD)?
2) Would a complete removal and clean reinstallation be a better approach?
3) And how can I remove Ubuntu from the partition on E (as I want to dedicate the C-drive exclusively for Ubuntu)?
This is my first post so please be patient. I am unfamiliar with this part of the installation process.
I'm new to Fedora, and I had an issue when I installed Fedora 11. I remembered how to make a user and password in the terminal, and I had to look up the command to start the GUI, but the two I found, gdm and startx, were not recognised. How do I start the GUI?
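If gdm and startx aren't even found, the desktop packages are probably not installed at all (e.g. after a minimal install). A hedged sketch of the usual Fedora approach; the exact group names can be checked with yum grouplist:
su -c 'yum groupinstall "X Window System" "GNOME Desktop Environment"'
After that, startx should work, or you can set the default runlevel to 5 in /etc/inittab and reboot to get the graphical login.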
I was installing Fedora 13 on my computer, as I need it for a class I'm taking, but wanted to keep Ubuntu (10.04) as well. So the setup is now: Windows 7 on sda, Ubuntu and Fedora on sdb. I did not install the bootloader when Fedora 13 prompted me to. Running sudo os-prober and sudo update-grub only finds Windows 7. I know I can manually add an entry in 40_custom but have no idea how to go about that. The output of my fdisk -l is as follows:
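A rough sketch of a 40_custom entry; every device and path below is a placeholder you would need to read from the Fedora partition (the kernel and initramfs file names live in Fedora's /boot). Append something like this to /etc/grub.d/40_custom and re-run sudo update-grub:
menuentry "Fedora 13" {
    insmod ext2
    set root=(hd1,5)                       # placeholder: the partition holding Fedora's /boot
    linux /boot/vmlinuz-<version> root=/dev/sdb5 ro
    initrd /boot/initramfs-<version>.img
}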
I've tried to install Fedora 11, both 32-bit and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running the Intel RAID software under W7 currently and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
I installed Fedora on one of my three hard drives and it seems that something went wrong with the other two. Fedora recognizes the other drives, but it seems the data on those drives is not there anymore. The images below are screenshots from my Computer folder and the Disk Utility program. I tried to look through the Fedora documentation but did not find a solution to my case.
I currently have Windows 7 installed. There are two partitions: the OS is on one and the other is used for data. Is there any way this could be a problem?
The only readable error I have gotten is from SUSE; it said something about an unreadable file system. All of the others have just hung at some point during the live CD boot.
I have now tried live CDs of the following and none have worked.
It's a stock Alienware that I bought 3 years ago. The only change I have made is upgrading the RAM, which I bought directly from Dell to make sure it was compatible. The Alienware is an Intel Core 2 Duo, 4 GB of RAM, 160GB HD, dual SLI NVIDIA graphics cards. Not sure about the motherboard; I could figure it out if necessary. Is there any reason the hardware wouldn't be supported?
I have created a system using four 2TB HDDs. Three are members of a software RAID mirror (RAID 1) with a hot spare, and the fourth HDD is an LVM drive separate from the RAID setup. All HDDs are GPT partitioned.
The RAID is set up as /dev/md0 for the mirrored /boot partitions (non-LVM), and /dev/md1 is LVM with various logical volumes within it for swap space, root, home, etc.
When GRUB installs, it says it installed to /dev/sda, but the machine will not reboot and complains "No boot loader . . ."
I have used the Super Grub Disk image to get the machine started and it finds the kernel, but although "grub-install /dev/sda" reports success, the computer will still not start, again with "No boot loader . . ." (Because it is currently running and md1 is syncing, I cannot restart to get the complete complaint phrase. I thought I'd let it finish the sync operation while I search for answers.)
I have installed and re-installed several times, trying various settings. My question has become: when setting up GPT and reserving the first gigabyte for GRUB, you cannot set the boot flag for that partition. I have tried GParted as well as the normal Debian partitioner, and both will NOT let you set the "boot flag" on that partition. So, as a novice (to Debian), I am assuming the "boot flag" does not matter.
Other readings indicate that, yes, you do not need a "boot flag" partition; the "boot flag" is only for a Windows partition. This is a Debian-only server, no Windows OS.
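For context, and only as a sketch: on a GPT disk booted through a legacy BIOS, what GRUB wants is not a boot flag but a small partition marked as a BIOS boot partition (a couple of MB is plenty; the whole reserved gigabyte is not needed). With parted this would look roughly like:
parted /dev/sda set 1 bios_grub on        # assuming partition 1 is the small reserved one
In the Debian partitioner the equivalent is choosing "Reserved BIOS boot area" as the use for that partition (the exact wording may vary by release). grub-install to /dev/sda should then have somewhere to embed its core image, which may be why the "No boot loader" complaint appears without it.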
I already have a 300 GB SATA drive with Ubuntu 8.04 installed on it. It currently runs off my mobo's onboard SATA 1.0 RAID controller. I recently purchased a SATA 2.0 RAID PCI controller that I will be putting in the computer, along with 2 new 750 GB Western Digital Caviar Green hard drives. I wish to add the two drives in a RAID 1 configuration to store all my pictures, files, and movies. Every instruction and tutorial I can find on setting up RAID on Linux assumes you are performing a fresh install and gives no tips or instructions for existing installations. I already have Ubuntu installed and do not wish to reinstall it. I want to leave my installation on the 300 GB drive and just add in the 2 750GB drives for storage.
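A rough sketch of adding a mirror to an existing install without reinstalling, assuming the two new drives show up as /dev/sdb and /dev/sdc (placeholders) and you use Linux software RAID rather than the card's fakeraid: partition each drive, build the array, put a filesystem on it, and record it for boot:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Then add a line to /etc/fstab mounting /dev/md0 wherever you keep the pictures/files/movies. The existing Ubuntu install on the 300 GB drive stays untouched.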
I have an old Fedora machine set up to use RAID 1. I'm trying to install Ubuntu 10.04 Desktop on it, but the installer can't seem to override the RAID partition. I have two 80GB drives, but the "Prepare disk space" screen only shows one 80GB partition called "/dev/mapper/isw_dfafhagdgg_RAID_Volume01". When I selected "Erase and use the entire disk", it gave me the error "The ext4 file system creation in partition #1 of Serial ATA RAID isw_dfafhagdgg_RAID_Volume0 (mirror) failed".
So I tried going back and specifying partitions manually. However, it only shows me the one device /dev/mapper/isw_dfafhagdgg_RAID_Volume01, and I can't delete it to get my original two hard drives back so I can recreate a RAID 1 setup. What am I doing wrong? I thought Ubuntu supported software RAID 1?
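One hedged observation: an isw_* device under /dev/mapper is Intel BIOS/fakeraid metadata picked up by dmraid, not a Linux md array, which is why the installer presents it as a single unsplittable volume. If the goal is plain disks (or a fresh mdadm RAID 1), the old metadata can usually be erased; this destroys the existing fakeraid set, so only do it if nothing on it is needed:
sudo dmraid -r                    # list detected fakeraid sets
sudo dmraid -E -r /dev/sda        # erase the metadata from each member
sudo dmraid -E -r /dev/sdb
Alternatively, booting the installer with the nodmraid option should make it ignore the metadata and show the two drives separately.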
I have a system with RAID 0 running Windows 7. I want to load Ubuntu on a standalone disk. I have two disks for my RAID 0, two storage disks, and an extra disk for Ubuntu. I have tried to install Ubuntu, but upon start-up I get an error on the disk, and I then had to use GRUB to repair my MBR. My question is how do I make this work so I can have a dual-boot system? I have been running Ubuntu on my laptop.