CentOS 5 Hardware :: Corsair F80 In RAID 0 Configuration With P6x58D-E Won't Boot After 5.4 Installation
Dec 19, 2010
Configuration:
Centos 5.4
2x Corsair F80 in RAID 0 (Intel Matrix Storage)
ASUS P6x58D-E
Stripe Size = 128kB
Tried to run CentOS 5.5 in a dual-boot configuration with Windows 7. Windows 7 installed without issues within minutes (amazing performance with the SSDs). Installed CentOS 5.4, but several things wouldn't work right:
- it wouldn't recognize the NTFS partitions, so I decided just to install CentOS on the box. Completed the installation and rebooted, but it wouldn't boot up afterward. Even in a non-RAID configuration it would not boot off the SSD. Replaced the SSDs with two 1TB Seagate Barracudas in RAID 0 and all went well.
View 1 Replies
Nov 17, 2010
I am trying to create a dual boot of Ubuntu and Windows 7 on my new 60 GB Corsair SSD. Windows 7 installs without a problem, but when I try to install Ubuntu Desktop 10.10 it will not detect my SSD, either in the installer or when I start Ubuntu off the CD/USB. So I downloaded 10.04, and it does see the SSD, but once it starts copying files it gives me this error: "[Errno 30] Read-only file system: '/target/bin'." I have tried many different downloads of 10.10 and 10.04, along with many different disks burned at the slowest speeds. I have tried it many times off different flash drives with different downloaded ISOs.
I did MD5-test my ISO and it was fine, I burned the disk at 8x using OS X 10.6 Disk Utility, and the media check before pressing install came back with no errors. I ran a check on my RAM and it passed with no errors. I tried the alternate Ubuntu 10.04.1 installer; it hangs when installing the base system at 17%, while extracting libacl1. Then I get the error "The debootstrap program exited with an error (return value 1). Check /var/log/syslog or see virtual console 4 for the details." This CD was burned at 8x, the MD5 is okay, and the CD had no errors. Windows 7 was able to see my SSD and fully install to it. My system is custom built; here are its specs: AsRock mobo (I had an ASUS but it fried, so instead of buying a good mobo I got a temp till I can upgrade)
AMD 5200+ AM2
2 gigs Corsair XMS ram DDR2
BFG Nvidia 8800 GTS, 6XX MB of memory (I cannot remember off the top of my head)
ASUS sata DVD burner
Corsair Force Series 60GB SSD
Thermaltake Toughpower XT 775W
View 1 Replies
View Related
Nov 26, 2010
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this is a problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have a M4A87TD EVO MB with two Seagate drives in Raid 0. (The raid controller is a SB850 on that MB) I use the raid utility to create the raid drive that Windows7x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, gparted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Manager. Gparted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and check.)
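For what it's worth, a minimal sketch of the usual fakeRAID activation steps from the live session (assuming dmraid handles this controller's metadata; the /dev/mapper name is a placeholder, use whatever dmraid -s reports):
sudo apt-get install dmraid kpartx
sudo dmraid -ay                            # activate the BIOS RAID set
sudo dmraid -s                             # list detected sets and their names
ls /dev/mapper/                            # the array (and its partitions) should appear here
sudo kpartx -av /dev/mapper/<raid-set>     # map the partitions if dmraid did not already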
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps someone to tell me that the raid controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a raid controller from a MB.
View 6 Replies
View Related
Mar 24, 2011
If you want, skip straight to the 'QUESTION' at the end of my post & refer to the 'EXPLANATION' later. EXPLANATION: Using Debian 6.0.1 Squeeze 64-bit. Just put together a brand new 3.3 GHz 6-core AMD. I had a nightmare with my Highpoint 640 RAID controller, apparently because Debian Squeeze now handles RAID through sysfs rather than /proc/scsi. The solution to this, of course, is to recompile the kernel with the appropriate module for /proc/scsi support. So I thought "screw that", yanked out the RAID card, and went with Debian's software RAID. This allowed me to basically complete my mission. The RAID is totally up and running, except for one final step... I can't get the RAID to automount at boot (a sketch of the usual fix is included after the hardware list below).
My hardware setup;
- Debian is running entirely on a 64 GB SSD. (sda)
- I have 3x 2 TB hard drives used for storage in a RAID 1 array (sdc, sdd, sde)
[Code]....
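A minimal sketch of what usually makes a Debian software RAID assemble and mount at boot (array name, mount point, and filesystem type below are assumptions for illustration):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # record the array so it is assembled at boot
update-initramfs -u
echo '/dev/md0  /srv/storage  ext3  defaults  0  2' >> /etc/fstab   # example fstab entry
mount -a                                          # test the mount without rebooting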
View 2 Replies
View Related
Sep 15, 2010
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 terabyte WD HDD drives, which holds all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into 'Try Ubuntu' mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components, and told it to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but shows the two component disks of the RAID array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses, and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
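For reference, the local-top approach mentioned above usually boils down to a tiny script along these lines (a sketch from memory, not verified against that blog; paths follow Ubuntu's initramfs-tools layout):
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid -- bring up BIOS RAID sets before root is mounted
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
/sbin/dmraid -ay
After saving it, mark it executable (chmod +x) and run "update-initramfs -u" so it actually lands in the initramfs.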
View 4 Replies
View Related
Feb 23, 2010
I've started this new topic because my original one was a bit old and my circumstances have changed a bit. I'm trying to install CentOS on a system with an Adaptec 5805 hardware RAID card (real RAID, not fake), and am having problems when a large (over 2 TB) array is partitioned. I have two arrays on the card: one is a RAID 1 with two 500 GB drives that I am installing the OS to, and the other is four 1 TB drives in a RAID 5, so I wind up with around a 2.7 TB array. I was having all kinds of problems when I left both arrays in place during the install, but a firmware upgrade on both the motherboard and the RAID card seemed to improve things to the point where, as long as I do not tell the installer to format the 2.7 TB array in any way, I can get the install finished and it will boot up and work. If I touch the array in any way during the install, I get a system hang after a question about booting from the CD-ROM drive.
My final test was to do a fresh install, remove and re-add both arrays so they were clean, and leave the 2.7 TB array completely out of the install process. Then after the install I set up a GPT partition using parted from the command line; I didn't even put a file system on it. When I rebooted I got the same hang after the CD-ROM boot prompt. I then used the install DVD to boot into rescue mode and wiped the partition on the 2.7 TB array using
dd if=/dev/zero of=/dev/sdb bs=1024 count=1024
which made the drive look empty. Now the system boots up fine again. The only thing I can't tell, since it goes by too quickly, is whether there is any kind of CD-ROM prompt like when the array is partitioned. I'm wondering: when the array is partitioned as GPT, is it possible it looks like a CD-ROM and is thus stopping the boot somehow? Is there any other way of partitioning a 2.7 TB array, so I can see if taking GPT out of the equation will fix the problem?
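Worth noting: that dd only clears the first megabyte, and GPT keeps a backup header at the end of the disk, so a "wiped" array can still carry GPT leftovers. For reference, a minimal parted sequence for re-partitioning the large array (device name /dev/sdb assumed, as above):
parted /dev/sdb mklabel gpt              # or 'mklabel msdos' to take GPT out of the equation (MBR tops out around 2 TB)
parted /dev/sdb mkpart primary 0% 100%   # one partition spanning the whole array
parted /dev/sdb print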
View 19 Replies
View Related
Apr 18, 2011
When I set up my current workstation (now at CentOS 5.6) I connected the two SATA drives in a RAID 1 configuration. Later, I realized I had a spare EIDE drive, so I installed it, partitioned it, and added it as a hot spare using "mdadm -a". So, now I have three disk drives doing the work of one (a quick verification sketch follows the device.map below). I updated /etc/mdadm.conf using the output from mdadm --examine --scan, and the resulting lines are:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1f00f847:088f27d2:dbdfa7d3:1daa2f4e spares=1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=f500b9e9:890132f0:52434d8d:57e9fbfc spares=1
I also updated the /boot/grub/device.map file to read:
(hd0) /dev/sda
(hd1) /dev/sdb
(hd2) /dev/hda
[code]....
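A quick way to confirm the spare really is attached to both arrays (a sketch; the partition name in the last line is illustrative):
cat /proc/mdstat              # the spare shows up with an (S) suffix
mdadm --detail /dev/md0       # 'spare' should appear in the device table
mdadm --detail /dev/md1
# if it ever needs to be re-added by hand:
# mdadm /dev/md0 --add /dev/hda1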
View 4 Replies
View Related
Apr 22, 2010
I have a database server (IBM x3650 M2) with about 3 TB of data on SAN (Hitachi), with LVM on top of software RAID (RAID 1) based on multipath (2 SAN boxes in different buildings). After booting the server, multipath starts, but no md device starts the mirror. The same configuration with SLES 10 works. (A possible mdadm.conf tweak is sketched after the mdstat output below.)
cat /proc/mdstat
Personalities : [raid1]
md13 : active raid1 dm-17[1] dm-14[0]
[code]....
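One common cause is mdadm scanning the raw sd* paths before multipath has built its maps. A sketch of the usual workaround (the UUID is a placeholder; take it from mdadm --examine):
# /etc/mdadm.conf -- restrict scanning to the multipath maps
DEVICE /dev/mapper/*
ARRAY /dev/md13 level=raid1 num-devices=2 UUID=<uuid-from-mdadm-examine>
# then rebuild the initrd so assembly happens after multipath (mkinitrd on RHEL/CentOS, or the distro's equivalent)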
View 5 Replies
View Related
Jan 8, 2011
I have a pre-existing setup with Windows XP Professional and CentOS5.5 on a dual boot setup with the Linux drive setup as the primary drive hosting the grub menu.
I am replacing these machines with new updated ones, and they have Windows set up on a RAID 0. I think it would be easiest to follow my previous setup and move the RAID to secondary SATA ports and put the Linux drive on the primary SATA port, or should I just change the boot order in the BIOS to have the secondary Linux drive boot first?
Can I move a RAID setup to secondary controller ports without breaking the RAID?
View 1 Replies
View Related
Mar 18, 2010
As the topic says: is it possible to set up a software RAID 1 during the installation of CentOS 5.4? Or is it only possible to set up RAID 0?
In case that RAID 1 is possible: could someone be so kind and tell me how to set RAID 1 up? Which checkboxes to check and which partitions to make? I have 2 equal HDDs and use the graphical installation of CentOS.
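For orientation, the command-line equivalent of what the installer's RAID 1 option should end up producing looks roughly like this (a sketch with assumed partition names):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
cat /proc/mdstat              # both members should show as [UU]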
View 7 Replies
View Related
Aug 3, 2010
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising: one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
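One thing that may be worth checking from MegaCLI: a RAID set built on the external box can show up as a "foreign" configuration that has to be imported before the controller exports a logical drive to the OS. A sketch (binary name and path vary with the LSI package):
/opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -aALL      # any foreign configs present?
/opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Import -a0      # import them on adapter 0
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL          # logical drives the OS should now see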
View 3 Replies
View Related
May 9, 2010
I'm building up a server with a friend. I'm in charge of the hardware and my friend of the software (Linux distribution, etc.). After searching through many models of motherboards I've found one candidate: the Asus P6X58D Premium. Then I realized that there probably isn't any driver for that motherboard. So I browsed to [URL], then to the download tab, and discovered there was "DOS, Netware, Unixware, Windows 2000 to 7 x64 and others", but no Linux option.
So I searched other similar motherboards, for example the P6T Deluxe. Under the Linux option, the drivers were about audio issues... So, does a 64-bit server distribution based on Debian, Ubuntu, etc. really need these drivers to work correctly on the Asus P6X58D Premium?
View 1 Replies
View Related
Dec 13, 2010
I am trying to install CentOS 5.5 on a Dell PowerEdge 650. The RAID controller is a CERC ATA-100/4ch. Although some drivers can be found on the Dell site, no suitable driver for CentOS 5.5 is among them. Is it possible to install CentOS 5.5 on this server?
View 2 Replies
View Related
Jan 8, 2011
I am looking to build a server with 3 drives in RAID 5. I have been told that GRUB can't boot if /boot is contained on a RAID array. Is that correct? I am talking about a fakeraid scenario. Is there anything I need to do to make it work, or do I need a separate /boot partition which isn't on the array?
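For the Linux software RAID (md) case, the usual pattern is a small RAID 1 /boot (legacy GRUB can read a RAID 1 member as a plain partition) with the rest on RAID 5, and GRUB installed to every disk's MBR. A sketch with assumed device names:
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1   # small /boot mirror
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2   # everything else
# install legacy GRUB to each disk so any of them can boot:
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
# repeat for /dev/sdb and /dev/sdc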
View 3 Replies
View Related
Dec 10, 2009
I am going to be using CentOS 5.4 for a home storage server. It will be RAID 6 on 6 x 1 TB drives. I plan on using an external enclosure which is connected via two SFF-8088 cables (4 drives apiece). I am trying to find a non-RAID HBA which supports this external enclosure and allows me to use standard Linux software RAID.
If this is not an option, I'd consider using a hardware-based RAID card, but they are very expensive. The Adaptec 5085 is one option but is almost $800. If that is what I need for this thing to be solid then that is fine, I will spend the money, but I am thinking that software RAID may be the way to go.
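If the enclosure does end up behind a plain HBA, the software RAID side is straightforward; a sketch of the mdadm setup (device names assumed):
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.ext3 /dev/md0                          # ext3 is the CentOS 5 default
mdadm --detail --scan >> /etc/mdadm.conf
cat /proc/mdstat                            # watch the initial resync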
View 3 Replies
View Related
Dec 22, 2010
I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.
View 4 Replies
View Related
Jan 12, 2010
I'm a long-time Windows user and IT tech, but I have long felt that my geek levels were too low, so I installed Ubuntu last week (9.10 x64). Hopefully I can make it my primary OS. I have two 80GB drives in RAID 1 from my nforce RAID controller, nforce 570 chipset. Then a 320 GB drive where I placed Ubuntu, which is also where GRUB placed itself. And also a 1TB drive.
When GRUB tries to boot XP I get the error message: "error: invalid signature". I checked the forum as much as I could and tried a few things, but no change.
Drives sdc and sdd are the two drives in the RAID; they are matched exactly, but detected as different here. I really think they should be seen as one drive.
how I can make grub work as it should?
Also, if/when I need to make changes to grub, do I really have to use the live CD?
Code:
============================= Boot Info Summary: ==============================
=> Grub 1.97 is installed in the MBR of /dev/sda and looks on the same drive
in partition #1 for /boot/grub.
=> Windows is installed in the MBR of /dev/sdb
=> Windows is installed in the MBR of /dev/sdc
[Code].....
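Given a layout like the one above, a hand-written GRUB 2 entry is one way to test the XP chainload; a sketch only (device numbers are assumptions, and the drivemap line is needed only if XP insists on being on the first BIOS disk):
# /etc/grub.d/40_custom -- then run: sudo update-grub
menuentry "Windows XP (manual)" {
    insmod ntfs
    set root=(hd1,1)
    drivemap -s (hd0) (hd1)
    chainloader +1
}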
View 2 Replies
View Related
Sep 15, 2010
I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install GRUB, as I know it is still currently broken, and the install goes flawlessly. However, on first boot using my live GRUB CD, unless I tell my computer to point to the CD it will hang (even though it is told to boot from CD first, so I'm not sure why). When I tell it to boot Linux, it will not boot, saying the kernel is missing files (too much to list, sadly, and none of it I understand), then offers me a terminal to input "help" into for a list of Linux commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.
View 6 Replies
View Related
Apr 17, 2011
I have (3) 60GB SSD. I want to use (2) in RAID 0 for Ubuntu and (1) alone for Windows 7.
What is the best way to go about this? I know how to create the RAID 0 array with the (2) disks and install Ubuntu on it. But how do I get the RAID array with Linux on it to dual boot with Windows 7?
Linux - RAID 0 (2) SSDs
Windows 7 - (1) SSD
View 3 Replies
View Related
Aug 20, 2011
I have an Asus P5Q-E with SATA RAID (fake RAID?) with two 250GB disks in RAID 0, and have Windows 7 installed. I freed some space and want to install Ubuntu 11.04 alongside Win7. Installed choosing the "alongside" option. Installed... reboot... grub rescue (error: no such device). Win7 disk recovery got me access to Win7 again. Where is Ubuntu? Nowhere... the free space is still free (not partitioned) and I found no evidence of the Ubuntu install. OK... second try with the "something else" option: created /boot, swap and / partitions and installed GRUB on the /boot partition (this time I want to use the Windows boot loader to choose... and if the installation fails again, at least I've still got Windows). Installed... reboot... grub rescue.
Again, the free space wasn't partitioned; the installer spent some time doing something, but there is no evidence of an Ubuntu install. Also, GRUB seems to be installed on the first disk (even when I choose /boot as the device for boot loader installation). I've used EasyBCD to create an entry in the Windows boot loader, and when I choose that I get grub> (not grub rescue>).
Can Ubuntu be installed on such conditions ?
View 1 Replies
View Related
Jul 6, 2010
I'm starting to spec out a new GNU/Linux boxen. I'm considering the ASUS P6X58D Premium motherboard. Has anyone had any experience with this board in terms of compatibility with openSUSE 11.2/11.3? I'm mainly concerned about:
on-board sound: Realtek ALC889
ethernet: Marvell 88E8056
new interfaces: SATA3, USB3.0
being able to use: AHCI (which I had to turn off on a netbook, an Acer Aspire 1410, for hard drive stability), 64-bit kernel
I'm planning to use an i7-930 or i7-950 (should the price drop in August as rumored) with 6 GB DDR3 1333 RAM (not planning to overclock, as yet). The graphics card will probably be an NVidia GT240 or so (budgeting about $100 for the graphics card).
Also, I'm hoping to move some old IDE drives over using IDE->SATA 1 bridge cards. Any advice on this? Or should I just get a new SATA 3 drive? For the optical drives (DVD RW/CD RW), I'd like to use bridge cards since there's no benefit for the higher speed SATA interface, or is there?
P.S. For the naive question: As I haven't dealt with 64-bit installs before... I would need the 64-bit kernel for full access to all the extra address space, correct? After that, I can still run 32-bit apps, with a slight speed penalty?
View 1 Replies
View Related
Dec 9, 2009
I upgraded from F10 to F12 using preupgrade. The upgrade itself completed with no errors, but I'm unable to boot afterward.
Symptoms:...Grub starts, the initramfs loads, and the system begins to boot. After a few seconds I get error messages for buffer i/o errors on blocks 0-3 on certain dm devices (usually dm0 and dm2, but I can't get a shell to figure out what those are). An error appears from device-mapper that it couldn't read an LVM snapshot's metadata. I get the message to press "I" for interactive startup. UDEV loads and the system tries to mount all filesystems. Errors appear stating that it couldn't mount various LVM partitions. Startup fails due to the mount failure, the system reboots, and the steps repeat.
Troubleshooting done:...I have tried to run preupgrade again (the entry is still in my grub.conf file). The upgrade environment boots, but it fails to find the LVM devices and gives me a question to name my machine just like for a fresh install. I also tried booting from the full install DVD, but I get the same effect. Suspecting that the XFS drivers weren't being included, I have run dracut to create a new initramfs, making sure the XFS module was included. I have loaded the preupgrade environment and stopped at the initial GUI splash screen to get to a shell prompt. From there I can successfully assemble the raid arrays, activate the volume group, and mount all volumes -- all my data is still intact (yay!). I've run lvdisplay to check the LVM volumes, and most (all?) appear to have different UUIDs than what was in /etc/fstab before the upgrade -- not sure if preupgrade or a new LVM package somehow changed the UUIDs. I have modified my root partition's /etc/fstab to try calling the LVM volumes by name instead of UUID, but the problem persists (I also make sure to update the initramfs as well). From the device-mapper and I/O errors above, I suspect that either RAID or LVM aren't starting up properly, especially since prior OS upgrades had problems recognizing RAID/LVM combinations (it happened so regularly that I wrote a script so I could do a mkinitrd with the proper options running under SystemRescueCD with each upgrade).
I have tried booting with combinations of the rootfstype, rdinfo, rdshell, and rdinitdebug parameters, but the error happens so early in the startup process that the messages quickly scroll by and I just end up rebooting.
System details:4 1-TB drives set up in two RAID 1 pairs. FAT32 /boot partition RAIDed on the first drive pair. Two LVM partitions -- one RAIDed on the second drive pair and one on the remainder of the first drive pair. Root and other filesystems are in LVM; most (including /) are formatted in XFS.
I've made some progress in diagnosing the issue. The failure is happening because the third RAID array (md2) isn't being assembled at startup. That array contains the second physical volume in the LVM volume group, so if it doesn't start then several mount points can't be found.
The RAID array is listed in my /etc/mdadm.conf file and identified by its UUID but the Fedora 12 installer won't detect it by default. Booting the DVD in rescue mode does allow the filesystems to be detected and mounted, but the RAID device is set to be /dev/md127 instead of /dev/md2.
The arrays are on an MSI P35 motherboard (Intel ICH9R SATA chipset) but I'm using LInux software RAID. The motherboard is configured for AHCI only. This all worked correctly in Fedora 10.
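A sketch of the usual fix for the md127 renumbering (run from the rescue environment or after a successful boot; the initramfs path follows F12's naming, adjust the kernel version as needed):
mdadm --examine --scan                                        # compare against /etc/mdadm.conf; add or correct the md2 ARRAY line
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)    # rebuild so the initramfs carries the updated config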
View 3 Replies
View Related
Mar 4, 2010
I set up an array of two HDDs in RAID 0 using Intel ICH7R, and I have Win7 installed on it. I'd like to have a dual boot with Karmic, too. I googled a little bit and found that it's not as easy as I thought. Is there anyone who's experiencing the same situation?
View 1 Replies
View Related
Mar 21, 2010
Is it humanly possible to have a dual boot system and have it in RAID 0?
I have three Hard drives.....
The three drives that I have are: 250GB IDE
1TB Sata
1TB Sata
And I edit/produce a lot of music/videos.
What I am looking to do is have my 250GB IDE drive as my operating system drive (partitioned in two or something, one for Windows, another for Ubuntu).
And I would like, when I am on either operating system (Ubuntu/Windows 7),
for it to see my computer as having a 2TB HDD (setting the two 1TB HDDs in RAID 0).
So in layman's terms... 250GB for the OSes and 2TB (2 x 1TB HDD) as DATA drives.
View 1 Replies
View Related
Mar 24, 2010
Currently I have Windows 7 x64 installed on a pair of RAIDed SSDs using an X58 motherboard (ICH10R), and I want to dual boot Ubuntu x64 (either 9.10 or 10.04) on another hard drive (not part of the RAIDed SSDs). Does anyone know if this can be done? I haven't found anything out there about this. I have tried to install Ubuntu from the CD. It gets to the install screen booting from the CD, but it doesn't let me install or try Ubuntu. I hit enter and nothing happens. I can look through all of the options but can't install or boot into Ubuntu.
View 1 Replies
View Related
Jul 31, 2010
I was trying to dual boot Ubuntu on my desktop, which already has Windows 7 installed on it with RAID 0. I have two 500GB hard drives, and when I boot from the CD and try to install Ubuntu it only detects one of my hard drives; it says I only have 500GB and that there is no OS installed on this machine! I looked online but couldn't find any solution!
View 9 Replies
View Related
Dec 2, 2010
I upgraded my Dell T3500 from lucid64 to maverick64. The system cannot boot the current image because it gives up waiting for the root device. It gives something like:
Code:
udevd-work[175]: '/sbin/modprobe -bv pci: v00008086d00002822sv00001028s00000293bc01sc04i00' unexpected exit with status 0x0009
ALRT! /dev/mapper/isw_bbfcjjefaj_ARRAY01 does not exist. dropping to a shell!
at which point it drops to BusyBox at the (initramfs) prompt. I cannot boot this kernel in either normal or recovery mode (Ubuntu 10.10, kernel 2.6.35-23-generic).
However, if I select the previous kernel in grub (2.6.32-26), it will boot fine. This is my main production machine and it hosts my project management software for my team.
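A sketch of what can be tried from the working 2.6.32 kernel (package names as on Ubuntu; the kernel version is the one from the post above):
sudo apt-get install --reinstall dmraid        # make sure the dmraid tools and initramfs hooks are present
sudo update-initramfs -u -k 2.6.35-23-generic  # rebuild the failing kernel's initramfs
# from the (initramfs) prompt itself, this sometimes brings the array up by hand:
#   dmraid -ay
#   exit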
View 9 Replies
View Related
Mar 20, 2011
There have been many postings on doing Raid 0 setups, and it seems the best way looks like softRaid, but there were some arguments for fakeRaid in dual boot situations. I've seen some posts on dual boot windows/linux in Raid 0, but I was hoping to do a multi-boot using a grub partition, with several Linux distros and Windows 7. There will also be a storage disk for data, but not in the array. From what I gather, I'll need a grub partition which can only reside on one of the two disks, one swap partition on each disk, then the rest I can stripe.
I've got two 73GB WD raptor drives to use for the OS's and programs. I'm just getting my feet wet with the terminal in linux (Ubuntu makes it way too easy to stay in GUI), and the inner workings of the OS, so I have several questions:
Is this going to be worth the effort? Obviously I'm trying to boost performance in boot and run times, but with Grub on a single drive, will I see much gain?
Does this sound like the right methodology (softRAID)? I only have two spare PCI slots, which don't seem like they would be conducive to hardware RAID, but someone who knows more could convince me otherwise.
[Code]...
View 5 Replies
View Related
Sep 1, 2011
I recently upgraded from 10.04 to 11.04 and I now often get boot messages about a degraded raid.
I'm fairly experienced, but I'm confused which raid it is talking about. I have a raid5 array, but I don't boot of that, and it seems fine when I finally get it to boot. Previously, I didn't have any other raid arrays[1], but now I seem to have two others called md126 and md127, they both seem to be degraded. Where did they come from?
[1] I *do* have two 80GB drives that I was booting from in RAID1, but that was a looong time ago, and I have since only booted from one of them. The partition table indeed shows partitions 1 and 5 are raid autodetect, and /proc/mdstat shows they are degraded ([U_]). Could it be that this is causing the problem? If so, why has this only started to happen since the upgrade from 10.04 to 11.04? Anyway, perhaps it is a good idea to add that second disk back into the raid1 array. If so, how do I do that? Note that I've also noticed that when I boot and get to the screen where I select from the different kernel versions, I now get a couple of really old ones too - my thought is that these are from the raid1 disk that I stopped using. If I add it to the array, how can I be sure it will mirror in the correct direction?
It could be that I have fairly recently plugged in that second RAID1 disk, after a long time of not having enough spare sata sockets (I switched my RAID5 array from 8 disks to only 3 disks, so suddenly had a lot more spare sockets).
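A sketch of how to check what md126/md127 actually are and, if one of them really is the old RAID 1 pair, re-attach the second disk (device names are placeholders; the member already active in the array is the copy source, so the newly added disk is the one that gets overwritten):
cat /proc/mdstat                      # which devices sit in md126/md127?
sudo mdadm --detail /dev/md126        # UUID, members, state
sudo mdadm --examine /dev/sdb1        # superblock of the idle disk, for comparison
# if it belongs to the RAID 1 array and should be mirrored again:
sudo mdadm /dev/md126 --add /dev/sdb1
watch cat /proc/mdstat                # the resync copies from the active member onto /dev/sdb1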
View 9 Replies
View Related
Feb 11, 2010
I cloned a drive with CentOS 5.3 from a drive connected to ATA0 device 0 of an ATA controller to an identical drive connected to the same ATA controller, ATA1 device 0. No matter what I do it boots from ATA1 device 0, and I need to be able to control which one it is booted from. When I have Puppy Linux on one drive and CentOS on the other I can control the boot through the system BIOS either way, no matter whether Puppy is on ATA0 or ATA1, so it's not a BIOS issue. It appears (to me) to be a grub configuration issue. Since the two drives are clones they both have VolGroup00; I think grub loads from the last VolGroup00 found.
Here is my grub.conf file:
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
# boot=/dev/hde
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-128.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00
initrd /initrd-2.6.18-128.el5.img
Here is the Device.map:
# this device map was generated by anaconda
(hd0) /dev/hde
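Since both clones carry a volume group named VolGroup00, one way out is to rename the VG on one drive (by UUID, because the names collide) and update that drive's configuration to match; a sketch (the UUID is a placeholder taken from vgs):
vgs -o +vg_uuid                          # find the UUID of the clone's VolGroup00
vgrename <vg-uuid-of-clone> VolGroup01   # rename by UUID since two VGs share the name
# then, on the renamed drive, update:
#   /etc/fstab            (root and swap paths)
#   /boot/grub/grub.conf  (kernel line: root=/dev/VolGroup01/LogVol00)
# and rebuild its initrd:
#   mkinitrd -f /boot/initrd-2.6.18-128.el5.img 2.6.18-128.el5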
View 3 Replies
View Related