Ubuntu Installation :: Alternate 64bit Iso Install Raid Failure
Aug 27, 2010
UPDATE: I decided to reinstall and run the partitioner to get rid of the RAID. It wasn't worth dealing with, since the problem seems to be lower level: /dev/mapper was not listing any devices, and "Error 15" at GRUB points to legacy GRUB. So I'm avoiding the problem by getting rid of RAID for now; you can ignore this post. I found a nice GRUB2 explanation on the wiki, but it didn't help this situation since it probably isn't a GRUB problem. It is probably an installer failure to map devices properly, because the installer only used what was already available and didn't create the devices during the install. I don't know, I'm just guessing.
I had OpenSuSE 10.3 64-bit installed with software RAID mirrored swap, boot, and root. I used the alternate 64-bit Ubuntu ISO for installation. Since partitioning was already set up correctly and the RAID devices /dev/md0, md1, and md2 were recognized by the installer, I chose to format the partitions with ext3 and accept the configuration:
The installation process failed at the point of installing GRUB. It had attempted to install the bootloader on /dev/sda2 and /dev/sdb2. Since it would not let me fiddle with the settings, I moved on and rebooted the machine with the rescue option on the ISO I used for installing. Now I can see that the root partition is populated with files as expected. dpkg lists linux-image-generic, the headers, and linux-generic as installed along with other supporting kernel packages, and grub-pc is installed as well. However, the /boot partition (/dev/md1) was empty initially after the reboot. What is the procedure to get GRUB to install the bootloader on /dev/sda2 and /dev/sdb2, which make up /dev/md1, i.e. /boot?
Running apt-get update and apt-get upgrade installed a newer kernel, and this populated the /boot partition. Running update-grub results in "/usr/sbin/grub-probe: error: no mapping exists for 'md2'". grub-install /dev/md2 or grub-install /dev/sda2 gives the same error as well. Both commands indicate "Auto-detection of a filesystem module failed. Please specify the module with the option '--modules' explicitly." What are the right modules that need to be loaded for a RAID partition in the initrd? Should I be telling GRUB to use a raid module?
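For what it's worth, on a /boot-on-RAID1 setup the usual approach is to install GRUB2 to the MBR of each member disk rather than to the md device itself, after making sure the arrays are assembled so grub-probe can map them. A minimal sketch from the rescue shell; the device names are taken from the post above, and the exact --modules list depends on the GRUB2 version, so treat it as an assumption:

Code:
# make sure the arrays are assembled so grub-probe can see them
mdadm --assemble --scan
cat /proc/mdstat
# install GRUB2 to the MBR of both RAID1 members
grub-install /dev/sda
grub-install /dev/sdb
# if auto-detection still fails, name the raid modules explicitly
# (module names vary between GRUB2 releases: raid/mdraid on older,
#  mdraid09/mdraid1x on newer)
grub-install --modules="raid mdraid" /dev/sda
update-grub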
I am trying to install Xubuntu (Lucid) via the alternate CD... Everything goes smoothly until the installer tries to install software. There it presents me with something like: "There was a problem installing software. You can abort or choose to retry installation by choosing off the menu"
Had problems with hard drives yesterday; they wouldn't be recognised after I'd installed Linux. Today I fixed that problem (another thread on here, in the hardware section), but now, every single time the install gets to 75%, it asks me to insert the CD-ROM and basically hangs.
Hitting Enter doesn't try to fire up the CD-ROM; there's zero noise from it. It did this once yesterday, but all the other tries worked fine. Today it has failed the 7 different times I've tried, in exactly the same place, each time asking for the CD. I even tried re-burning the CD, and it still fails at the same place. It's just after doing something to APT, then storing something (not very helpful, I know). I can get a command prompt with Ctrl-Alt-F2, but I have no clue how to proceed when I get there.
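One thing you can do from that prompt is look at the installer's own log, which usually makes the "storing something" step less mysterious. A rough sketch, assuming the standard debian-installer console layout (shell on the Ctrl-Alt-F2 console, live log on tty4):

Code:
# on the Ctrl-Alt-F2 console, tail the installer log
tail -f /var/log/syslog
# check whether the CD is actually mounted where APT expects it
mount | grep cdrom
ls /cdrom
# Ctrl-Alt-F1 switches back to the installer screen, Ctrl-Alt-F4 shows the live log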
I am trying to install Ubuntu 10.04.2 onto a computer (the alternate CD, for RAID support). This is a fresh install; I am formatting all drives. Partitioning, setting up RAID, and the beginning of the installation appear to go fine. But part way through it sticks at a screen asking for a "media change" to the CD, which is already in the CD tray. Nothing I do appears to stop this (i.e. clicking 'Go Back', removing/reinserting the CD, etc.). I've even tried unplugging the network connection.
The specific error reads:

Code:
[!!] Install the base system
Please insert the disk labeled: 'Ubuntu 10.04.2 LTS _Lucid Lynx_ - release amd64 (20110211.3)'
in the drive '/cdrom/' and press enter.
Media change    <Go Back>    <Continue>

How do I get past this point? (And yes, the red text is in red.)
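If it helps, this symptom is often the installer having lost its mount of the CD rather than a bad disc. A hedged workaround sketch from the installer's second console (Ctrl-Alt-F2); the device name /dev/sr0 is an assumption, yours may be /dev/cdrom or similar:

Code:
# see whether the CD is still mounted where the installer looks for it
mount | grep /cdrom
# if not, remount it, then switch back to the installer and press Continue
umount /cdrom 2>/dev/null
mount -t iso9660 /dev/sr0 /cdrom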
I tried searching for the MD5 checksum page for 11.04 64-bit, both the alternate download version and the desktop download version, and found it, but that site seems to be down right now. I also got the feeling that the website was for the daily build and may not have the MD5 checksums for the release version. What are the MD5 checksums for Ubuntu 11.04 64-bit, alternate and desktop downloads?
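Whatever the values turn out to be, the usual way to check is against the MD5SUMS file published alongside the images rather than a checksum pasted into a forum post. A minimal sketch; the mirror URL and exact ISO filenames are assumptions, any releases mirror carries the same MD5SUMS file:

Code:
# download the checksum list for the release, then verify the ISO against it
wget http://releases.ubuntu.com/11.04/MD5SUMS
md5sum -c MD5SUMS 2>&1 | grep ubuntu-11.04-alternate-amd64.iso
# or compute the checksum directly and compare by eye
md5sum ubuntu-11.04-desktop-amd64.iso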
I am trying to install Ubuntu on an nVidia board which, based on the forums I have perused, has a fakeRAID chipset. Originally I tried to install according to https://help.ubuntu.com/community/FakeRaidHowto, to no avail. I gave up and am resorting to breaking down my RAID setup (thus losing my Windows XP installation) and using Linux's software RAID (which seems to be the recommended method) with the alternate CD install.
I went into the BIOS and completely disabled my board's RAID function. I then followed the instructions here: https://help.ubuntu.com/community/In...n/SoftwareRAID , except it popped up with a window advising me "one or more serial ATA RAID configurations have been found. Do you wish to activate these RAID devices?", which that page didn't mention. I believe I selected "Yes". Setting up the partitions, I made (from beginning of disk to end of disk) a RAID1 200MB /boot partition, a RAID0 / partition (90GB?), free space not set up in RAID (approx. 20GB on each SATA drive, to later install Windows on, if that's even possible), and then a 2GB swap partition at the end of one of the disks (is it OK that this one wasn't in RAID?).
The installation completed, as far as I can tell, without a hitch. Also of note: my network card doesn't work in the installer (it gives an error message about being unable to configure DHCP), so I'm not connected to the internet. Then I restart and the following appears: "grub loading: error: biosdisk read error", followed by what appears to be the Ubuntu loading splash (just a small white shape in the middle of the screen), and then:
On my PC, which was running WinXP, I thought of installing Ubuntu. (I have installed Linux a few times in the past years and use it on another couple of PCs.) But something went wrong. This machine has 2 x 200GB Maxtor drives in a RAID 0 configuration, supported by the motherboard's nVidia chipset and working well in Windows. When I ran the live Ubuntu 10.04 CD, GParted was not able to access the drives in the RAID configuration until I installed the mdadm and kpartx packages; then the existing data became visible. After that initial moment I thought all was OK and proceeded to install Lucid on the machine, dual-booting with Windows. I partitioned manually so that in my 400GB RAID drive there is an 80GB NTFS partition with WinXP, a 90GB extended partition for Linux ext4 and swap, and then a last 200GB NTFS partition for data.
All went well, but now on restarting the computer nothing happens, nothing loads, GRUB is not showing, and it looks like I cannot launch Linux or Windows. All the data from WinXP and the Ubuntu installation seems to be on the disks, but the PC is just not booting. I suppose the problem is that the RAID configuration is not handled properly during the installation, but is there anything I can do now, apart from reinstalling Windows XP or installing Ubuntu in a non-RAID configuration?
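One recovery path that sometimes works for nVidia fakeRAID is to activate the array from the live CD and reinstall GRUB onto the mapped device. A rough sketch only; the set name nvidia_xxxxxxxx and the root partition number are assumptions, check ls /dev/mapper/ for the real names:

Code:
sudo apt-get install dmraid kpartx
sudo dmraid -ay                               # activate the BIOS RAID set
ls /dev/mapper/                               # note the set name, e.g. nvidia_xxxxxxxx
sudo kpartx -a /dev/mapper/nvidia_xxxxxxxx    # create the partition mappings
sudo mount /dev/mapper/nvidia_xxxxxxxx5 /mnt  # mount the Linux root partition (number is a guess)
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt grub-install /dev/mapper/nvidia_xxxxxxxx
sudo chroot /mnt update-grub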
When I attempted to install 64-bit Fedora from my DVD, it failed while installing packages with the unhandled exception:
" Traceback: File /usr/lib/anaconda/users.py, line 163, in setRootPassword, self.admin.setpassUser(rootuser,cryptPassword(pass word,algo=algo),True) File /usr/lib/anaconda/instdata.py,line 171, in write,algo=self.getPassAlgo()
[code]...
I know Python quite well, but I have no idea why None is being passed as the first argument to self.admin.setpassUser when a non-None value is needed. The other curious thing is that the trace does not show the call to setpassUser() from the entry after it, but rather to getPassAlgo(). Needless to say, this bug has kept me from installing Fedora 11 on my 64-bit machine.
I was finally able to install Fedora 11 x64 after choosing to only install packages from the repository on the install DVD. Prior to that, when I had chosen to install from the default online repositories, the install itself failed with a Python exception (see my other post). Now, however, once I boot after the install, I eventually receive a kernel panic message and a failure. The exact same thing happened with CentOS 5.3 x64 after a flawless install. So unless someone knows what might be going on, I will assume that Fedora, Red Hat, and their offshoots for x64 systems are just not for me. I have been able to successfully install the latest Mandriva and SUSE x64 Linux distros, so whatever Red Hat/Fedora has done just does not work on my system.
I have a client with a pair of Supermicro 6025B-T servers that he wants to have Ubuntu 10.10 64-bit Server running on for VM/cloud experimenting. He needs these set up as RAID 10. I can go into the Adaptec utility, make the array, make it bootable, and get to the point in the installer where it asks me if I want to use the SATA array - then it gets to the partitioning screen and the array is nowhere to be found.
I am going to set up a new Ubuntu 10.04 system using RAID 1 soon. Installation will be via the alternate CD. Older distributions required manually installing GRUB to the second drive, so the machine could still boot if the first drive failed. I found different statements about how this is handled since 9.10, e.g.:
Quote:
Install GRUB boot-loader on the second drive (this step is not needed if you use Ubuntu 9.10)
or
Quote:
installing GRUB to second hard drive depending on your distribution
> grub-install /dev/md0
or
> grub-install /dev/sda
> grub-install /dev/sdb
Is GRUB2 automatically installed on all RAID member drives when using the 10.04 alternate CD, i.e. does the installer effectively execute something like "grub-install /dev/md0" during the installation?
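Whatever the installer does by default, it does no harm to put GRUB2 on both members explicitly after the first boot, so either disk can boot on its own. A minimal sketch, assuming the two RAID1 members are /dev/sda and /dev/sdb:

Code:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub
# alternatively, re-run the MBR target selection dialog:
sudo dpkg-reconfigure grub-pc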
I've tried the Universal USB Installer, but that doesn't support the alternate ISO. And if I select the regular desktop one, it screws up the installation when I try to boot.
Unetbootin gives me an error during the CD-ROM step. It says it can't find or copy files from the CD-ROM and so on. Well, of course - there is no CD-ROM...
I have a Compaq Presario S4020WM with a 2.0GHz Athlon XP 2400+ CPU, 768MB RAM, two 40GB hard drives, and an HD Radeon 4650 AGP 1GB graphics card.
I have tried to install Ubuntu with this CD, and it gets past the keyboard detection part and then tells me I need to get the CD-ROM drivers via removable media. I know this is a problem with Ubuntu 10.04, because I can install 8.04 just fine.
I don't know what to do. I have tried to install from a USB stick, but my computer is too old for that. I know it's not the specific CD, because I've used about 5 different brands of CD just to see if it was the CDs I was using.
I would just upgrade from 8.04 to 10.04, but I get to the dbus part and the computer starts to run really slowly and eventually colored artifacts just show up; I left it for an hour to see if they would go away, but they didn't. I know my computer CAN run 10.04 because I have done it before, but I uninstalled it and now I can't seem to get it to work again.
It fails to install from an SD card over USB; it looks for a CD-ROM when there is none, and will not let me go on without a CD-ROM. I need to encrypt my drive - why doesn't the normal CD do this just like the other Linux systems? Hide the option if you have to.
Today I decided to replace my 9.04 install with 10.04. (I did this on a separate hard disk.) As I am a big fan of LVM I used the 'Alternate' install CD. Everything installed fine.
However, upon booting I observed two things: firstly there was no grub menu. No countdown timer, no menu. Just a flickering cursor. After 15 seconds or so I got a message telling me that:
Code:
/dev/mapper/bromine-root does not exist

(That's my root partition.) It also said it had given up waiting. Finding this kind of strange, I tried the alpha of 10.10 - same result. Hence I have two questions: firstly, where did the nice GRUB menu go; secondly, what is wrong with LVM and GRUB these days? At the initramfs prompt I am thrown to, there are some LVM utilities, and they appear to show my volumes.
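For the record, when the root LV isn't found at boot you can sometimes continue from that very initramfs prompt by activating the volume group by hand, which at least tells you whether the LVM metadata is intact. A sketch, assuming the volume group implied by the message is called "bromine":

Code:
# inside the (initramfs) busybox shell
lvm vgscan
lvm vgchange -ay bromine     # activate all logical volumes in the group
ls /dev/mapper/              # bromine-root should now appear
exit                         # let the boot continue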
Switching back to my old pair of hard disks, everything works as expected (i.e., the hardware is fine and supported by Linux).
I downloaded the Xubuntu 10.04.2 Alternate Install CD ISO file from http://mirror.anl.gov/pub/ubuntu-iso...10.04/release/
When I checked the md5sum of the downloaded file, however, there was a mismatch.
The md5sum given at both http://mirror.anl.gov/pub/ubuntu-iso...elease/MD5SUMS and https:[url].... is 209cfc88be17ededb373b601e8defdee *xubuntu-10.04.2-alternate-i386.iso, but running the command

Code:
md5sum xubuntu-10.04.2-alternate-i386.iso

generated the following, obviously different, checksum for me:
I have a Gigabyte 6A-M61P-S3 with an nVidia chipset, and I'm using the built-in graphics rather than a discrete graphics card. Ubuntu 10.10 and previous releases ran flawlessly on it using the nouveau drivers. I tried doing an upgrade to 11.04 using the Alternate AMD-64 CD, and it seemed to complete successfully, but when it came time to reboot, I had no video output at all after the BIOS screen. This is a test machine, so I went ahead and did a clean install using encrypted LVM with the Alternate AMD-64 CD after confirming that 64-bit 11.04 ran fine using the Live CD.
The installation went fine, but the first reboot flashed a brief "error: no video mode activated" message and then I lost all video output. Subsequent reboots didn't give me any error message, but I had no video output. I suspect there are some boot parameters that would have gotten the nouveau driver working, but I wanted to try Unity, so I rebooted to get the GRUB menu, chose Recovery Mode, selected failsafe graphics, and got to the desktop, then installed the proprietary nVidia driver (current), and once I rebooted everything was golden.
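If anyone hits the same blank screen and wants to stay on nouveau, the usual first thing to try is the nomodeset kernel parameter from the GRUB menu; whether it actually helps this particular chipset is an assumption on my part:

Code:
# at the GRUB menu, press 'e' on the Ubuntu entry and append "nomodeset" to the
# line starting with "linux", e.g.:
#   linux /boot/vmlinuz-... root=... ro quiet splash nomodeset
# then boot with Ctrl-X or F10.  To make it permanent once booted
# (assumes the default "quiet splash" line in /etc/default/grub):
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' /etc/default/grub
sudo update-grub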
Is there any way to do an 11.04 alternate command-line install without an internet connection? I am trying to install Ubuntu on an internet tablet which has no Ethernet port, and I don't know how to get WiFi to work during the alternate install. In previous Ubuntu versions it was possible to leave the network unconfigured and install completely from CD or USB stick. Isn't this possible in current versions?
I've tried to install Fedora 11, both 32- and 64-bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running Intel RAID software under W7 currently, and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID setup causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
When I try to boot the 11.04 64-bit alternate ISO, I get the following message after it says that ISOLINUX (etc.) is loaded: "EDD: Error 8000 reading sector 2855". When I remove the CD it says "gfx.c32: not a COM32R image", and then there is a GRUB shell.
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04 on it. I then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't; when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says:

Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array.

I have tried "mdadm --assemble --scan" and it gives this output:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.

I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran "mdadm --detail /dev/md0", which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
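A common next step in this situation is to look at what each member disk thinks about the array, and only then try a forced assembly. Forcing is a judgment call: it can bring a RAID 5 back up when event counts are only slightly out of sync, but it is not risk-free. A sketch, assuming the members are the whole disks /dev/sdb through /dev/sde (yours might be partitions like /dev/sdb1):

Code:
# compare the superblocks - look at "Events" and "State" on each member
sudo mdadm --examine /dev/sd[bcde] | egrep 'dev|Events|State'
# stop the half-assembled array, then retry assembly with --force
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sd[bcde]
cat /proc/mdstat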
Recently, while using a Highpoint 2310 (RAID 5), I lost the motherboard and CPU. I had to reinstall CentOS and found it needed to initialize the array to function - total loss of data. Question: if I use a true hardware card (3ware 9650SE) and experience a serious hardware loss, or lose the C drive, can the card be installed with the drives on a new motherboard and function without data loss, even if the OS must be reinstalled?
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and back on with the drive there just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to trick the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5TB drive to use as another backup - in other words, how to remove the bad drive from the array and replace it with a fresh drive, without data loss.
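For what it's worth, the usual sequence is to assemble the array degraded without the bad disk, then add the replacement and let it resync. This is a sketch only; the device names are taken from the output above, it presumes the other two members are healthy, and the replacement appearing as /dev/sdb is an assumption:

Code:
# assemble degraded from the two good members (--run starts it with a disk missing)
sudo mdadm --assemble --run /dev/md0 /dev/sdc /dev/sdd
cat /proc/mdstat
# once the replacement disk is installed (assumed to appear as /dev/sdb):
sudo mdadm --manage /dev/md0 --add /dev/sdb
# watch the rebuild
watch cat /proc/mdstat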
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 - the RAID drive for the above three disks

The sda1 disk has failed and the array is running on 2 of 3 disks. The other disks in the machine are:

/dev/sdc (OS disk)
/dev/sde (new 2TB disk - unused)
/dev/sdf (new 2TB disk - unused)
My plan is to rebuild the array using the two new disks as RAID1. Would the best way to do this be to create a new RAID1 device on /dev/md1, then copy all data over from /dev/md0? Also - this may sound stupid - but since all 3 drives in md0 are identical, I'm not sure which disk is physically the bad one. I tried disconnecting each disk one by one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure of how to remove it properly.
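A minimal sketch of that plan, assuming each new disk gets a single partition (sde1, sdf1), the old array keeps running degraded while you copy, and /mnt/oldraid and /mnt/newraid are placeholder mount points (the filesystem choice is also an assumption):

Code:
# create the new mirror from the two 2TB disks
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
sudo mkfs.ext3 /dev/md1
sudo mkdir -p /mnt/newraid
sudo mount /dev/md1 /mnt/newraid
# copy everything from the old degraded array
sudo rsync -aHx /mnt/oldraid/ /mnt/newraid/
# remove the already-failed member from md0 (it must be marked failed first)
sudo mdadm --manage /dev/md0 --remove /dev/sda1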
Based on the reading I've done over the past 48 hours, I think I'm in serious trouble here with my RAID 5 array. I got another 1TB drive and added it to my other 3 to increase my space to 3TB... no problem.
While the array was resyncing (it got to about 40%), I had a power failure. So I'm pretty sure it failed while it was growing the array, not the partition. The next time I booted, mdadm didn't even detect the array. I fiddled around trying to get mdadm to recognize my array, but no luck.
I finally got desperate enough to just create the array again... I knew the settings of my array and had seen some people have success with this method. When creating it, it asked me if I was sure because the disks appeared to belong to an array already, but I said yes. The problem is that when I created it, it created a clean array, and this is what I'm left with:
Code:
/dev/md0:
        Version : 00.90
  Creation Time : Sun Sep  5 20:01:08 2010
     Raid Level : raid5
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
[Code]....
I tried looking for backup superblock locations using e2fsck and every other tool I could find, but nothing worked. I tried testdisk, which says it found my partition on /dev/md0, so I let it create the partition. Now I have a /dev/md0p1, which it won't let me mount either. What's interesting is that gparted reports /dev/md0p1 as the old partition size (1.82 TB)... the data has to still be there, right?
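One thing worth spelling out (a sketch only, and only useful if the re-created array used exactly the same disk order, chunk size, and metadata version as the original): ext3/ext4 keeps backup superblocks at predictable offsets, and you can ask the tools where those offsets would be for a filesystem on that device, then point e2fsck at one of them in read-only mode first:

Code:
# print where the backup superblocks WOULD be for this device (-n = dry run, changes nothing)
sudo mke2fs -n /dev/md0
# try a filesystem check against one of the listed backups, without writing
sudo e2fsck -n -b 32768 /dev/md0
# only if that output looks sane, repeat without -n to actually repair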
I went to setup my linux box and found that the OS drive had finally died. It was an extremely old WD raptor drive in a hot box full of drives so it was really only a matter of time before it just quit on me. Normally this wouldn't be such a big deal however I had just recently constructed an md RAID5 array of 3 1TB disks to act as an NFS mount for basically all of my important files. Maybe 2-3 weeks before the failure I had finished moving all of my most important stuff onto that array. Now I know that the array is intact. All the required data is sitting on those disks. Since only the OS level disk failed on me I should be able to get a new disk in there, reinstall ubuntu and then rebuild that array. how exactly do I go about doing that with mdadm? Do I create the array from the /dev character devices like when I initially built the array?
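For the record, you do not re-create the array (that would overwrite the superblocks); on a fresh install you just install mdadm, let it scan for the existing superblocks, and record the result. A sketch, with the assumption that the three members show up as /dev/sdb1-/dev/sdd1 on the new install:

Code:
sudo apt-get install mdadm
# assemble from the existing superblocks - no data is touched
sudo mdadm --assemble --scan
cat /proc/mdstat
# persist the array definition so it assembles on every boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# then add the filesystem to /etc/fstab and re-export it over NFS as before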
1. One of my HDDs (sda) failed in software RAID 1. I RMA'd the drive to Western Digital and got another one. Now, do I have to format it before putting it in my CentOS server? If yes, how do I format it?
2. Also, since the sda drive failed, I have to mark sda as failed in the RAID. Then do I remove the sda HDD and pop in the new drive as sda? Or do I switch sdb to sda and put the new drive in sdb's place?
3. After that, I add it to the RAID, correct? Then once the RAID rebuilds, I have to do GRUB? Can GRUB be done via SSH only, or do I need to be at the datacenter or get a KVM?
4. Last question: I've got a Supermicro hot-swap HDD case, so do I need to shut down the server while I replace the drives? I just want to be sure I do this correctly.
The following is the guide that I will be using; please look at it and let me know if that is the correct procedure: [URL]. Another thing: when the HDD (sda) failed, I put it back into the RAID, but the drive has bad sectors, which is why I'm replacing it.
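To the numbered questions above, a typical RAID 1 replacement goes roughly like the sketch below. It assumes two arrays (md0 and md1) mirrored across sda/sdb with MBR partition tables, so adjust the names to your layout; no pre-formatting of the new disk is needed, grub-install works fine over SSH, and whether you can swap without a shutdown depends on the controller and backplane:

Code:
# mark the dying disk's partitions failed and remove them (if not already done)
mdadm --manage /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm --manage /dev/md1 --fail /dev/sda2 --remove /dev/sda2
# after swapping in the new disk as sda, copy the partition table from the survivor
sfdisk -d /dev/sdb | sfdisk /dev/sda
# add the new partitions back into the arrays and let them resync
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
cat /proc/mdstat
# reinstall the bootloader on the new disk so it can boot on its own
grub-install /dev/sda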
I *had* a server with 6 SATA2 drives with CentOS 5.3 on it (I've upgraded over time from 5.1). I had set up (software) RAID1 on /boot for sda1 and sdb1, with sdc1, sdd1, sde1, and sdf1 as hot backups. I created LVM (over RAID5) for /, /var, and /home. I had a drive fail last year (sda). After a fashion, I was able to get it working again with sda removed. Since I had two hot spares on my RAID5/LVM setup, I never replaced sda. Of course, on reboot, what was sdb became sda, sdc became sdb, etc. So, recently, the new sdc died. The hot spare took over, and I was humming along. A week later, before I had a chance to replace the spares, another drive died (sdb). Now I have 3 good drives; my array is degraded, but it had been running (until I just shut it down to try the replacement).
I now only have one replacement drive (it will take a week or two to get the others). I went into linux rescue from the CentOS 5.2 DVD and changed sda1 to a Linux (as opposed to Linux RAID) partition type. I need to change my fstab to look for /dev/sda1 as /boot, but I can't even mount sda1 as /boot. What do I need to do next? If I try to reboot without the disk, I get: insmod: error inserting '/lib/raid456.ko': -1 File exists. Also, my md1 and md2 fail because there are not enough discs (it says 2/4 failed). I *believe* this is because sda, sdb, sdc, sdd, and sde WERE the drives in the RAID before, and I removed sdb and sdc, but now I do not have sde (because I only have 4 drives) and sdd is the new drive. Do I need to label these drives and try again? Suggestions? (I suspect I should have done this BEFORE the failure.) Do I need to rebuild the RAIDs somehow? What about LVM?
The following scenario: my server is in a data center on a different continent, with two disks and software RAID 1.
One day I see that a disk has failed (for example in /proc/mdstat). Of course I should replace the failed disk ASAP. Now that I think about it, I am not sure how. What should my email to the data center support guy mention to make sure he doesn't replace the wrong disk?
With hardware RAID it is very easy, because the controller usually has some kind of red LED indicator. But what about software RAID?
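The usual answer is to give the support tech the serial number of the failed disk, which you can read out over SSH and which is printed on the drive's label. A sketch, assuming the failed member is /dev/sdb (smartctl comes from the smartmontools package):

Code:
# see which member mdadm considers failed or missing
cat /proc/mdstat
mdadm --detail /dev/md0
# read the serial number of the failed disk (either tool works)
hdparm -I /dev/sdb | grep -i 'serial number'
smartctl -i /dev/sdb | grep -i 'serial number'
# the persistent names under /dev/disk/by-id also embed model and serial
ls -l /dev/disk/by-id/ | grep sdb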