Debian :: Installation Broke Raid Setup?
May 27, 2011
My system includes two 120GB disks in a fake RAID-0 setup with Windows Vista installed on them. For Debian I bought a new 1 TB disk. My plan was to test Debian, so I installed it to the new disk; the idea was to remove that disk afterwards and use Windows as before. Everything went fine and Debian worked perfectly, but now that I have removed the 1 TB disk from the system, GRUB shows up at boot in rescue mode.
Is my RAID setup now corrupted? GRUB seems to be installed on one of the RAID disks. Did GRUB overwrite some RAID metadata? Is there any way to recover the RAID setup?
dmraid -ay:
/dev/sdc: "pdc" and "nvidia" formats discovered (using nvidia)!
ERROR: nvidia: wrong # of devices in RAID set "nvidia_ccbdchaf" [1/2] on /dev/sdc
ERROR: pdc: wrong # of devices in RAID set "pdc_caahedefdd" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "pdc_caahedefdd"
RAID set "nvidia_ccbdchaf" already active
ERROR: adding /dev/mapper/nvidia_ccbdchaf to RAID set
RAID set "nvidia_ccbdchaf1" already active
View 1 Replies
May 3, 2011
I have a Dell with Win7 installed on two disks as RAID 0. I added a third HD and installed 11.04 onto it. This boots OK, but only Linux shows in the GRUB menu. Win7 is not there, and I have no way to boot it, as the installer put GRUB on disk 0 of the RAID, overwriting the Windows boot code.
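One hedged approach, assuming the fakeraid set itself is still intact and only the boot code on disk 0 was replaced, is to let GRUB find Windows through the dmraid-mapped volume rather than restoring the Windows MBR:
Code:
sudo apt-get install dmraid   # activates BIOS/fakeraid sets under /dev/mapper
sudo dmraid -ay               # map the Windows RAID-0 volume
sudo update-grub              # os-prober should now detect Win7 and add a menu entry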
View 9 Replies
View Related
Jul 4, 2010
I recently updated the kernel to the newest version(2.6.32-23-generic) using the update manager and now I am unable to boot in to my Lucid installation.
My setup is LVM on top of a RAID 0 array. My computer had been running in this configuration since Lucid was released.
The error I get on boot is to the effect of "/dev/mapper/ubuntu-os is not available", and then I get dropped into BusyBox.
Once in Busy Box if I try to use mdadm to mount the RAID array I get this error:
If I boot in to the live CD I can mount all of the partitions and LVM volumes, so it does not appear to be a failed drive or volume.
I have looked in the mdadm.conf, LVM config and GRUB config files and searched Google for an answer, to no avail...
Ultimately I would like to find a solution which doesn't involve a re-installation.
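A hedged recovery sketch from the live CD: assemble the array, activate LVM and rebuild the initramfs from a chroot. The volume name ubuntu-os comes from the error above; everything else is an assumption to adapt.
Code:
sudo mdadm --assemble --scan            # bring up the RAID-0 array
sudo vgchange -ay                       # activate the LVM volume group
sudo mount /dev/mapper/ubuntu-os /mnt   # the root LV named in the error message
# mount a separate /boot inside the chroot first if you have one
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt update-initramfs -u -k 2.6.32-23-generic   # rebuild the initrd with mdadm/LVM support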
View 6 Replies
View Related
May 2, 2010
Upon upgrading from Karmic to Lucid, everything comes up seemingly fine. Once logged in to the OS, my three monitors show the desktop wallpaper as well as all the icons on the desktop. A few programs that I have set to autostart do so.
However, for some reason I can only access one of the three monitors (and not the one with the gnome-panel). As soon as I drag my mouse from my third monitor onto the one next to it, I lose complete control of it, and it starts dancing all over the place. If I try to drag a window from one screen to the next (using Alt + Left Click + drag and not moving the mouse onto the other monitor), the app goes from one side of the working monitor to the other side (like it's wrapping around), though I can see it on the other monitor for a bit until I get it far enough for it to wrap.
Here is my xorg.conf:
Code:
Section "Files"
FontPath "/usr/share/fonts/X11/misc"
FontPath "/usr/share/fonts/X11/100dpi/:unscaled"
FontPath "/usr/share/fonts/X11/75dpi/:unscaled"
[code]....
View 4 Replies
View Related
Jun 8, 2011
I've been running Debian for a while, and now have the chance to do some enterprise-level experimentation. I stumbled across a very cheap EMC Clariion CX300 fibre RAID unit on eBay with 10 disks (£90!), so I snapped it up with a fibre HBA. It's not a SAN, technically a DAS in its current configuration. I'd like to know how hard it is to set up one of these as an actual disk device in Debian. I tried installing the HBA (an Agilent Tachyon XL2 PCI-X card, 32 and 64-bit modes) in my current 'live' server (a Celeron 1.2GHz-based machine), but while Debian detected it as a FC controller card, mapping the LUNs within Navisphere on the CX to the host caused nothing (obvious) to show up in the OS. From what I've read about setting up SANs (not DASs, there seems to be precious little info on setting up these), there's firmware involved with the card, and there are no references to getting a Tachyon running under Linux.
I also snagged a good deal on a PowerEdge 1850 server, but as it's the PCI-Express version, the Tachyon won't fit, so I'm considering buying a PCI-E HBA. I found a QLogic QLE2460 for a good price, but I'm hesitant to buy it, in case I still can't get Debian to see the disks. I'd most like to have the device connected to the PowerEdge instead of the Celeron machine. Have I missed a step in the configuration, or is there anything I can do to test if the system is working as it should be? The units we use at work are much simpler: set the RAIDs up, map the host LUNs and the disk shows up in Windows. I'm also sure I've set everything up correctly on the CX, as the same LUN is available to Windows. Just need to get Debian to see something on the end of the HBA!
System specs:
- Live server: Celeron 1.2GHz, 512MB RAM, Intel D815EEA2 board, SATA soft-RAID hard drives, PCI Gb Ethernet and USB 2.0, Agilent Tachyon XL2 PCI-X FC HBA, Debian 6.0.1
- PowerEdge 1850: 2x 3.2GHz Xeons, 6GB RAM, SCSI RAID disks, PCI-E riser, planned QLogic QLE2460 PCI-E HBA
- RAID unit: EMC Clariion CX300, 10x 146GB FC disks, 2Gbps Fibre Channel interface
The PowerEdge is the planned replacement for the Celeron machine, which started life as my experimentation box and became my personal live server. Storage is holding it back, so I'd love to add a further five disks to the CX and have nearly 2TB of RAID-protected storage.
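A few non-destructive checks that might narrow this down; they assume nothing beyond a standard sysfs layout, and host0 is only an example host number:
Code:
lspci -nnk | grep -A3 -i fibre     # is the HBA on the bus, and has any kernel driver claimed it?
ls /sys/class/fc_host/             # an entry here means the kernel sees an FC host at all
# If an fc_host exists, rescan for the LUNs mapped in Navisphere:
echo "- - -" > /sys/class/scsi_host/host0/scan
cat /proc/scsi/scsi                # the CX300 LUNs should appear as ordinary SCSI disks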
View 1 Replies
View Related
May 31, 2010
I have a MSI Board that had this hard drive configuration.
200GB x Single EXT4 Ubuntu
320GB x Raid Mirror NTFS
320GB x Raid Mirror NTFS
[code]....
View 6 Replies
View Related
Jul 8, 2010
I'm attempting to install F13 on a server that has a 2-disk RAID setup in the BIOS. When I get to the screen where I select which drive to install on, there are no drives listed. The hard drives were completely formatted before starting the F13 installation. Do I need to put something on them before Fedora will install?
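As a hedged diagnostic, the installer's second console (Ctrl+Alt+F2) lets you check whether the BIOS RAID metadata is visible at all:
Code:
dmraid -r    # list the raw fakeraid metadata found on each disk
dmraid -ay   # try to activate the set; it should appear under /dev/mapper
# If you would rather use Linux software RAID instead, the "nodmraid" boot
# option makes the installer treat the disks as plain drives.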
View 3 Replies
View Related
Dec 15, 2010
So I didn't notice when I set up my CentOS 5.5 server that I left / as RAID 0 on md1. All the rest are RAID 1. Is there a way I can convert the array to RAID 1 without risking data loss? I'm glad I caught this before I set up any other services. I've only set up smb so far...
[root@ftpserver ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 16G 3.0G 13G 20% /
[code]....
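For what it's worth, mdadm of that era cannot reshape RAID 0 directly into RAID 1, so the usual path is back up, recreate, restore. A rough, hedged outline; the partition names are examples only:
Code:
mdadm --detail /dev/md1                            # confirm which partitions back md1
tar czf /backup/root.tar.gz --one-file-system /    # back up / somewhere off the array
# then, from rescue media:
mdadm --stop /dev/md1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkfs.ext3 /dev/md1                                 # recreate the filesystem and restore the backup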
View 1 Replies
View Related
Jun 4, 2010
I've spent all afternoon trying to install Ubuntu Lucid on my fakeRAID-0 configured (2) HDDs and am unable to set GRUB up. The fake RAID setup is provided by Intel Matrix Storage Manager, it is correctly enabled and the BIOS is also correctly set up -- in fact, I've managed to install Windows 7 with no significant hitch. After struggling with partitioning the drives (I had to follow advice I found in a very helpful guide online [0]), creating the filesystems AND getting Ubuntu's installer to actually do what it is supposed to do, I now cannot seem to set GRUB up. As it stands, my system is not bootable at all except via live CD.
This is how the RAID0 dev is partitioned:
Code:
# fdisk -l /dev/mapper/isw_ecdeiihbfi_Volume0
Disk /dev/mapper/isw_ecdeiihbfi_Volume0: 1000.2 GB, 1000210694144 bytes
255 heads, 63 sectors/track, 121602 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x6634b2b5 .....
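A hedged sketch of installing GRUB from the live CD via a chroot on the mapped volume; the partition suffix on the isw_ device varies (Volume01 vs Volume0p1), so adjust to what /dev/mapper actually shows:
Code:
sudo mount /dev/mapper/isw_ecdeiihbfi_Volume01 /mnt   # the root partition on the fakeraid volume
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt grub-install /dev/mapper/isw_ecdeiihbfi_Volume0
sudo chroot /mnt update-grub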
View 2 Replies
View Related
Apr 15, 2009
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between two hard drives for redundancy (I know it's not hardware RAID, but budget is tight) with the following table;
[Code]....
I set these up using the RAID button in the partitioning section of the installer, except for swap, which I set up using the New partition button, creating one swap partition on each hard drive that didn't take part in the RAID. Almost every time I tried to do this install, it halted due to an error with one of the RAID partitions and exited the installer. I actually managed it once, in about 10-15 attempts, but I broke it. After getting very frustrated I decided to build it using just 3 partitions
[Code]....
I left the rest untouched. This worked fine: after completing the install and setting up GRUB, I rebooted into the installed system. I then installed gparted and cut the drives up further to finish my table on both hard drives, and used mdadm --create etc. to create my RAID partitions. So I now have
[Code]....
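For reference, creating one of those RAID 1 pairs by hand looks roughly like this (device names are illustrative, not the poster's exact table):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf   # so the array is assembled automatically at boot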
View 2 Replies
View Related
Mar 20, 2011
(This is for a 100% Clean install)
Q1) I was wondering if it is possible to Dual boot Ubuntu with Windows XP on a 1TB RAID-0 setup ?
Q2) Also, is it possible to create a SWAP partition (for Ubuntu) on a NON RAID-0 HDD ?
Q3) Lastly... I read GRUB2 is the default boot manager... should I use that, or GRUB legacy / LILO?
I have a total of 3 HDDs on this system:
-- 2x 500GB WDD HDDs (non-advanced format) ... RAID-0 setup
-- 1x 320GB WDD HDD (non RAID setup)
(The non RAID HDD is intended to be a SWAP drive for both XP and Ubuntu = 2 partitions)
I plan on making multiple partitions... and reserve partition space for Ubuntu (of course).
I have the latest version of the LiveCD created already.
Q4) Do I need the Alternate CD for this setup?
I plan on installing XP before Ubuntu.
This is my 1st time dual booting XP with Ubuntu.
I'm using these as my resources:
- [url]
- [url]
Q5) Anything else I should be aware of (possible issues during install)?
Q6) Lastly... is there anything in Ubuntu like the AHCI (Advanced Host Controller Interface) drivers in Windows?
(Since I need a special floppy during Windows Install...) I want to be able to use the Advanced Queuing capabilities of my SATA drives in Ubuntu.
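Regarding Q6: there is no separate AHCI driver to install; if the BIOS SATA mode is set to AHCI, the in-kernel ahci driver is used and NCQ is enabled by default. A quick hedged check:
Code:
dmesg | grep -i ahci                    # confirm the controller is running in AHCI mode
cat /sys/block/sda/device/queue_depth   # a value greater than 1 means NCQ is active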
View 4 Replies
View Related
Aug 11, 2009
I've been trying to set up RAID 10 with Fedora 11 on my desktop using this tutorial: [URL].. Everything has been going well, except that when following the instructions and getting ready to format, it says I need to define / and /boot partitions. This was never covered in the instructions.
- LVM Volume Gropus
-- vg_jordan
--- fedoraRAID (type: ext4)
- RAID Devices
-- /dev/md0 (mountpoint/RAID/volume: vg_jordan ; type: LVM)
[Code]...
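A hedged example of the layout the installer is asking for: a small /boot outside the LVM (GRUB legacy in F11 cannot read RAID 10 or LVM), and / as a logical volume inside the RAID-backed volume group. Sizes and names are only illustrative:
Code:
/dev/md0   RAID 1 over small partitions    -> /boot  (ext4, ~500 MB, plain partition, no LVM)
/dev/md1   RAID 10 over the big partitions -> physical volume for vg_jordan
  lv_root  -> /      (ext4)
  lv_swap  -> swap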
View 2 Replies
View Related
Jan 8, 2011
I am looking to build a server with 3 drives in RAID 5. I have been told that GRUB can't boot if /boot is contained on a RAID array. Is that correct? I am talking about a fakeraid scenario. Is there anything I need to do to make it work, or do I need a separate /boot partition which isn't on the array?
View 3 Replies
View Related
Apr 20, 2015
I have created a system using four 2TB HDDs. Three are members of a mirrored soft-RAID (RAID1) with a hot spare, and the fourth HDD is an LVM hard drive separate from the RAID setup. All HDDs are GPT-partitioned.
The RAID is set up with /dev/md0 for the mirrored /boot partitions (non-LVM), and /dev/md1 is LVM with various logical volumes inside for swap space, root, home, etc.
When GRUB installs, it says it installed to /dev/sda, but the machine will not reboot and complains "No boot loader . . ."
I have used the supergrubdisk image to get the machine started, and it finds the kernel; "grub-install /dev/sda" reports success, and yet the computer will not start, again with "No boot loader . . ." (Currently, because it is running, I cannot restart to get the complete complaint phrase, as md1 is syncing. Thought I'd let it finish the sync operation while I search for answers.)
I have installed and re-installed several times, trying various settings. My question has become: when setting up GPT and reserving the first gigabyte for GRUB, you cannot set the boot flag for that partition. I have tried gparted as well as the normal Debian partitioner, and neither will let you set the "boot flag" on that partition. So, as a novice (to Debian), I am assuming that the "boot flag" does not matter.
Other readings indicate that yes, you do not need a "boot flag" partition; the "boot flag" is only for a Windows partition. This is a Debian-only server, no Windows OS.
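You're right that the boot flag is irrelevant here; with GPT and BIOS booting, what GRUB needs instead is a small unformatted partition flagged bios_grub for it to embed into. A hedged sketch (partition 1 is an example):
Code:
parted /dev/sda set 1 bios_grub on   # mark the small reserved partition for GRUB's core image
grub-install /dev/sda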
View 5 Replies
View Related
Dec 21, 2015
I have a new Debian install on an Asus H170M-Plus (I was going to use Ubuntu but it didn't support the hardware/software combo I needed).
The install went fine, but during the install it didn't see my 1TB RAID1 drive.
After reboot, Debian boots great, and I can mount the RAID drive in the file manager.
I can see it and in mtab it shows up :
"/dev/md126 /media/user/50666249-947c-4e8f-8f56-556b713a6b6a ext4 rw,nosuid,nodev,relatime,data=ordered 0 0"
How can I permanently add this mount point so that it is mounted at boot under /data?
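A hedged sketch: add an fstab line keyed to the filesystem UUID (the UUID below is presumed from the mtab mount point above; verify it with blkid /dev/md126 first):
Code:
mkdir -p /data
echo "UUID=50666249-947c-4e8f-8f56-556b713a6b6a  /data  ext4  defaults  0  2" >> /etc/fstab
mount -a   # test the entry without rebooting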
View 5 Replies
View Related
Dec 22, 2010
I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation itself went quite smoothly; I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.
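A hedged recovery outline from the installer's rescue mode; device names are examples and assume the root filesystem lives on /dev/md0:
Code:
mdadm --assemble --scan
mount /dev/md0 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt grub-install /dev/sda
chroot /mnt grub-install /dev/sdb   # install to both disks' MBRs so either one can boot
chroot /mnt update-grub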
View 4 Replies
View Related
Mar 27, 2010
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root lvm as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, when I try to run the command $sudo grub-install --root-directory=/mnt/ /dev/md0, I get these errors: grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea. grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root lvm as 'dev/mapper/vg-root'. This results in the grub-install error: $sudo grub-install --root-directory=/mnt/ /dev/md0 Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
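Both errors point at grub-install being aimed at the array/partition instead of a whole disk; with /boot on md RAID, GRUB2 is normally installed to the MBR of each underlying disk. A hedged variant of the command (sda is an example):
Code:
# (unlock the LUKS container first, as you already do when mounting)
sudo mount /dev/mapper/vg-root /mnt
sudo mount /dev/md0 /mnt/boot
sudo grub-install --root-directory=/mnt /dev/sda   # the disk, not the md device or a partition
# repeat for the other disk(s) in the RAID so any of them can boot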
View 4 Replies
View Related
Dec 2, 2009
I have 2 drives and wish to use the following partition setup.
sda1 /boot 1GB ext4
sda2 / 50GB ext4 raid 0
sdb1 / 50GB ext4 raid 0
Unfortunately only Ubuntu Server has the option to create a RAID during the install. Can somebody point me to a howto on setting something like this up? I'm thinking I will want to install onto an sdb2 partition, set up the RAID, and copy the file system onto the RAID.
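A hedged outline of the "install to sdb2, then migrate" idea, following the partition names from the post; treat it as a sketch, not a tested recipe:
Code:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
rsync -aAXx / /mnt   # copy the installed system from sdb2 onto the array
# then chroot into /mnt, fix /etc/fstab and /etc/mdadm/mdadm.conf,
# and run update-initramfs -u followed by grub-install /dev/sda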
View 2 Replies
View Related
Dec 1, 2010
I performed an install using the 5.0.6 amd64 netinst cd on a dual opteron server with an Areca ARC-1110 4-port SATA hardware RAID card. I have 2 250GB drives set up as RAID 1. The debian install saw it as only one drive, just as it should. Install went smoothly, but on reboot, the system would not load.
I did some research and tried a couple of things with no luck, like adding a delay to the GRUB command line. It just sits at loading the system for a while, then times out and loads BusyBox. Just to check things out, I booted an Ubuntu live CD and mounted the volume. The file system is there along with all of the necessary files. How do I use one of these cards successfully?
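Two hedged things to try from a rescue/chroot into the installed system: make sure the Areca driver (arcmsr) is in the initramfs, and give the card time to settle before the root filesystem is mounted:
Code:
echo arcmsr >> /etc/initramfs-tools/modules   # force the Areca driver into the initrd
update-initramfs -u
# and/or add "rootdelay=15" to the kernel command line (GRUB_CMDLINE_LINUX),
# then run update-grub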
View 2 Replies
View Related
Apr 25, 2010
I have set up a RAID1 array and am trying to test whether it is set up correctly and whether errors are detected, reported and recoverable.
Started up the mdadm monitor with:
Code:
I set the RAID array to a faulty state by doing:
Code:
However I do not get any problem reports to my e-mail address. When I test the mdadm I get this result:
Code:
When I look in the postfix folder, sure enough... there is no main.cf file there, but there IS a file named 'master.cf'. I am running Ubuntu 9.10 with default components -- I have postfix but no sendmail.
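A hedged sketch: mdadm can be told the address directly and asked to send a test event, and postfix can regenerate a missing main.cf; both assume a working local MTA is all that's missing. The address below is a placeholder.
Code:
mdadm --monitor --scan --mail=you@example.com --test --oneshot   # one test mail per array, then exit
dpkg-reconfigure postfix                                         # recreates /etc/postfix/main.cf from the chosen preset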
View 2 Replies
View Related
Aug 2, 2011
I have a situation where I need to set up some sort of storage solution with RAID 5 redundancy. I was thinking that Linux would be the way to go, but I am not certain which platform would be best.
I was thinking of running two SATA RAID controllers to get me somewhere between 4-6 TB in RAID 5. I am very comfortable with Ubuntu now and would love to use it. I have also used FreeNAS in the past, but would love to have a full OS on the machine if at all possible.
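For comparison, plain Linux software RAID does this without any RAID controller at all; a hedged example for four 2 TB disks (device names are illustrative):
Code:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0   # roughly 6 TB usable from four 2 TB disks in RAID 5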
View 2 Replies
View Related
Mar 11, 2011
I am just getting into the RAID world with my home server. What I have:
Asus M3A78-CM (may be wrong, can't remember for sure) motherboard with 6 SATA2 connectors
3x 2TB SATA2 drives
2GB of DDR2 RAM set in bank A
AMD dual core (I'll know exactly what it is when I get the system booted)
What I am trying to figure out is: when I build this system, I will put the HDDs into SATA ports 1-3, and in the BIOS I will set up a RAID 5 array. Now, do I just format and partition as normal? Would it be better to have a smaller, better-performing SATA2 drive for the system, so the RAID is only for file storage?
In what I have read about this, I need to format each drive into at least two partitions, but I do not know what actually needs to be done; the guides just vaguely say something about two partitions and then move on (trick of the trade? keep all of us in the dark? LOL). I would like to have a RAID for my storage and a faster disk for the OS and home directories. But if it cannot be done, then that's how it is. So do I put the TB drives in SATA ports 4-6 and my other drive in SATA port 1?
View 8 Replies
View Related
Feb 10, 2011
I've got a motherboard, an Asus M2A-VM, that has support for RAID 0, 1 and 10, and I was just curious if anybody has used a fakeraid for RAID 0 with Ubuntu. If so, did it work out as planned?
View 4 Replies
View Related
Jul 21, 2009
I'm looking to set up a CentOS server with RAID 5. I was wondering what the best way to set it up is, and how to do it so that I can add more HDDs to the RAID later on if needed.
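With mdadm, growing a RAID 5 later is supported; a hedged example where /dev/sde1 is the new disk's partition:
Code:
mdadm --add /dev/md0 /dev/sde1          # add the new disk as a spare
mdadm --grow /dev/md0 --raid-devices=4  # reshape the array to include it
resize2fs /dev/md0                      # enlarge the filesystem once the reshape finishes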
View 1 Replies
View Related
Dec 9, 2009
I'm setting up a backup server using Centos 5.3 and an Adaptec 5805 raid card and discovered that I can't use a raid setup that is over 2TB in size as the boot drive. What I eventually did was set up 2 raids on the same set of 4 drives so that I have a 200Gb 'drive' for booting and a 2.6TB 'Drive' for data. I want to keep the OS in the raid setting so I have some protection instead of having a dedicated stand alone drive for the OS. This will be for a company wide backup server and I want to minimize the possibility of drive failure for the OS as well as the Data.
I was able to install and reboot the system and everything seemed to be working but after some working on it a bit I did a reboot and wound up with a non-booting system. I can boot to the rescue mode with the install dvd and mount the original system and I even tried to reinstall the grub setup per instructions I found on the net but still I get a system that hangs up after it asks if I want to boot from the CD. If I take out the CDROM option from the boot lineup in the bios I stop at the same place minus the boot cd prompt.
I'm guessing it is something to do with one of the raid drives being over 2TB but I'm booting from a 200gb sized raid so I'm really at a loss for what to do next??
Is what I've described the correct way to handle booting up with a large raid or is there another way to reconfigure the drives as one big 2.8TB raid and use something other than grub to boot to it?
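Keeping the small RAID volume for booting is a reasonable approach; the 2 TB ceiling only applies to MS-DOS partition tables (and to GRUB legacy's view of them), so the 2.6 TB data volume just needs a GPT label. A hedged example, with /dev/sdb standing in for the big data 'drive':
Code:
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary ext3 0% 100%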
View 5 Replies
View Related
Nov 26, 2010
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this problem I can not solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have a M4A87TD EVO MB with two Seagate drives in Raid 0. (The raid controller is a SB850 on that MB) I use the raid utility to create the raid drive that Windows7x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others, is that when I load into Ubuntu, gparted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Package Manager. Gparted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps someone to tell me that the raid controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a raid controller from a MB.
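On the live session it is normally dmraid, not kpartx, that assembles a motherboard fakeraid set; a hedged check before trying anything more drastic:
Code:
sudo apt-get install dmraid
sudo dmraid -ay    # the RAID-0 volume should appear under /dev/mapper/
sudo dmraid -s     # show the detected set and its status
ls /dev/mapper/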
View 6 Replies
View Related
Sep 19, 2014
I am running a 14 disk RAID 6 on mdadm behind 2 LSI SAS2008's in JBOD mode (no HW raid) on Debian 7 in BIOS legacy mode.
Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".
Output from mdadm:
Code:
# mdadm -D /dev/md0
  /dev/md0:
      Version : 1.2
   Creation Time : Wed Nov 7 17:06:02 2012
     Raid Level : raid6
     Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
   Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
    Raid Devices : 14
[Code] ....
Output from blkid:
Code:
# blkid
  /dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
  /dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
  /dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"
[Code] ....
The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....
During the grub rescue an `ls` shows the member disks and also shows `(md/0)`, but if I try an `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in unknown filesystem. The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.
I have come close to losing my mind over this, I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, `grub-install` numerous times to all member disks without error, etc.
I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.
[URL] ....
One thing I noticed is it is only showing half the disks. I am not sure if this matters or is important or not, but one theory would be because there are two LSI cards physically in the machine.
This last screenshot was shown after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address I have no clue.
I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, same identical error.
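One hedged observation: b1c40379-... is the md array UUID (the linux_raid_member line in blkid), not a stale address, so grub.cfg referring to it is expected; 2c61... is the XFS filesystem UUID inside the array. The real failure seems to be that GRUB's core image cannot see all fourteen members at boot, which matches only half the disks being listed. A common workaround is to keep /boot off the big array on a small RAID 1 that the BIOS can always reach (device names are examples):
Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md1
# copy /boot onto it, point /etc/fstab at it, then re-run grub-install on sda and sdb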
View 14 Replies
View Related
Nov 3, 2010
I just installed Debian 5.0.4 successfully. I want to use the PC as a file server with two drives configured as a RAID 1 device. Everything with the RAID device works fine; the only question I have concerns the GRUB 0.97 bootloader. I would like to be able to boot my server even if one of the disks fails or the filesystem containing the OS becomes corrupt, so I configured only the data partitions to be a RAID 1 device, so that the second disk holds a copy of the last stable installation, similar to this guide: [URL]...
[Code]...
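With GRUB 0.97 the usual trick is to install it to the MBR of both disks and tell the second copy to treat its own disk as hd0, so the box still boots if the first disk dies. A hedged example assuming /boot (or /) is on the first partition of each disk:
Code:
grub --batch <<EOF
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF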
View 3 Replies
View Related
Jun 2, 2011
I have a Dell PowerEdge SC1425 with two SCSI-disks, that I have tried installing Debian Squeeze on. This machine has previously been running Lenny (with grub 1), and the upgrade was done by booting a live-cd, mounting the root partition and moving everything in / to /oldroot/, then booting the netinstall (from USB), selecting expert install and setting up everything (not formatting the partition).
Both disks have identical partition tables:
/dev/sda1 7 56196 de Dell Utility
/dev/sda2 8 250 1951897+ fd Linux raid autodetect
/dev/sda3 * 251 9726 76115970 fd Linux raid autodetect
/dev/sda1 and /dev/sdb1 contain a Dell Utility, that I have left in place.
/dev/sda2 and /dev/sdb2 are members of a Raid-1 for swap.
/dev/sda3 and /dev/sdb3 are members of a Raid-1 for / formatted with reiserfs.
After installation, grub loads, but fails with the following message:
GRUB loading.
Welcome to GRUB!
error: no such disk.
Entering rescue mode...
grub rescue>
Doing "ls" shows:
(md0) (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1)
I can do the following to get grub to boot:
set root=(hd0,3)
set prefix=(hd0,3)/boot/grub
insmod normal
normal
This will bring me to the grub menu, and the system boots.
It appears that grub has only found md0, which I believe is the swap partition, because ls (md0)/ returns error: unknown filesystem. I have installed grub to both sda, sdb and md1, and tried dpkg-reconfigure grub-pc and dpkg-reconfigure mdadm, as well as update-grub.
I manually added (md1) /dev/md1
to /boot/grub/device.map, but still no result.
I have run the boot_info_script.sh, but unfortunately I cannot attach the RESULTS.txt, because the forum aparently does not allow the txt-extension. Instead I have placed it here: [URL]. I am tempted to go back to grub-legacy, but it seems I am quite close to getting the system working with grub2.
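Before giving up on GRUB2, one hedged thing to try from the running system is re-embedding with the RAID and reiserfs modules explicitly, to both disks, then regenerating the config:
Code:
grub-install --recheck --modules="raid mdraid reiserfs" /dev/sda
grub-install --recheck --modules="raid mdraid reiserfs" /dev/sdb
update-grub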
View 6 Replies
View Related
Jul 19, 2011
I have a DL120 ProLiant server that has a P212 RAID card. If I install Lenny it works fine; however, I need Squeeze. If I upgrade to or install a new version of Squeeze, the RAID controller is no longer visible. I have done some snooping and it seems as though the cciss drivers have been replaced by the hpsa drivers, but I still can't seem to get the RAID card recognised. Anybody got any tips?
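A couple of hedged checks from the Squeeze system (or its installer shell): see whether any driver has bound to the P212 at all, and, if hpsa is the intended driver, its hpsa_allow_any parameter tells it to claim controllers that are not on its built-in list:
Code:
lspci -nnk | grep -A3 -i "smart array"   # is cciss or hpsa bound to the P212?
modprobe hpsa hpsa_allow_any=1           # ask hpsa to claim older/unlisted controllers
echo "options hpsa hpsa_allow_any=1" > /etc/modprobe.d/hpsa.conf   # make it permanent, then rebuild the initramfs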
View 1 Replies
View Related