Ubuntu Installation :: Serial ATA RAID Stripe Failed On Install?
Apr 13, 2010
I get this error when trying to install 10.04: "The ext4 file system creation in partition #1 of Serial ATA RAID isw_dceecfcacg_Volume0 (stripe) failed". I have a Sony Vaio with 2x 256 GB SSDs running RAID 0.
The ext4 file system creation in partition #1 of Serial ATA RAID isw_fjidifbhi_Volume0 (mirror) failed. I'm not sure what's wrong. How do I solve this? This happens after I enter my login details and click Next; then it fails on the install screen with this error.
I was running Ubuntu 9.04 Desktop on a headless Pentium 4 machine which is our file, mail, web & fax server. The two 250 GB SATA hard disks were in a RAID 1 array with full disk encryption. I ran the 9.10 upgrade via Webmin and it failed. I should have known then to copy everything over to a backup disk, but instead I rebooted.
On restart the machine accepted my encryption passphrase but promptly hung with a mountall symbol lookup error, code 127. So I can't start the machine to get at the disks, and using a Live CD is useless as it has no way to open the RAID array to get at the encrypted partitions. Although we have data backed up (as of last night), I'd hoped not to have to rebuild the entire server from scratch, but it's looking bad. I have taken one drive out and plugged it into another machine (Hercules), and the partitions show up as /dev/sdb1, /dev/sdb2 and /dev/sdb3.
If it weren't for RAID, I could open /dev/sdb2 (the main partition) in Disk Utility and enter my encryption passphrase to get access. But RAID adds a layer of obstruction that I have not yet overcome. I used mdadm to scan the above partitions and created the /etc/mdadm.conf file, which I edited to show the 2nd drive as missing (rather than risk corrupting both drives). I activated the RAID array with mdadm, and cat shows:
I've been searching the web for hours but have yet to find someone with a solution to this situation. If anyone has a thought on how to access this disk I'd be pleased to hear from you. In the meantime I will start building a new (9.10) machine from scratch, without RAID, 'cos that's probably going to be necessary.
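For the record, the degraded-assembly approach described above can be sketched as follows. This is only an outline, not a tested recipe: device names follow the post, and the volume-group name "vg0" is a placeholder for whatever vgscan actually reports.

```shell
# Assemble a degraded RAID1 from the single member we trust (the other
# member is deliberately left out so it can't be corrupted).
sudo mdadm --assemble --run /dev/md0 /dev/sdb2

# Unlock the LUKS container that lives on the array.
sudo cryptsetup luksOpen /dev/md0 cryptroot

# If the encrypted payload is LVM, activate it ("vg0" is a placeholder).
sudo vgchange -ay vg0

# Mount the root volume read-only so data can be copied off safely.
sudo mount -o ro /dev/vg0/root /mnt
```

The key point is that the layers must be opened in order: md array first, then the LUKS container on top of it, then any LVM inside that.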
I installed Fedora 12 and performed the normal updates. Now I can't reboot and get the following console error message.
ERROR: via: wrong # of devices in RAID set "via_cbcff jdief" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "via_cbcff jdief"
ERROR: no RAID set found
No root device found
Boot has failed, sleeping forever.
After having installed Ubuntu several days after its release, I have enjoyed the OS entirely except for one issue: zebra stripe crashes. When my desktop (specs are in my signature) is left idle, occasionally it will begin flashing from the blank navy blue screen (the screensaver when no screensaver is set) to zebra stripes visible in the top right corner of the screen. When I attempt to move the mouse or do anything at all, the stripes and blank screen continue flashing regardless. I end up having to terminate my session by holding down the power button. I have installed all updates to date. How can this be stopped?
If I have Windows installed on RAID 0, then install VirtualBox and install all my Linux OSes in VirtualBox, will they effectively be RAID 0 installs without needing to install RAID drivers?
I've come across this issue on one server we are supporting. Please help me understand why this issue occurs. Also, I want a command to display the power supplies for the server. Please let me know if anybody is familiar with these kinds of issues.
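If the server has a baseboard management controller reachable over IPMI (most rack servers do), one way to list power-supply status is with ipmitool. This is a sketch, assuming the ipmitool package is installed and the local IPMI kernel driver is loaded:

```shell
# List the sensor data records for the power supplies via the local BMC
sudo ipmitool sdr type "Power Supply"

# A broader health check of the chassis, including power state
sudo ipmitool chassis status
```

Vendor tools (Dell's omreport, HP's hpasmcli, etc.) can report the same information if the vendor agents are installed.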
Installing Ubuntu 10.10 Desktop on a Highpoint RocketRAID 2642. When installing Ubuntu, it does not find the drive. How do I install the drivers so that I can install, and after the installation boot, from the RAID drives?
I've been using Ubuntu since 6.04, not a linux guru by any means but can usually get myself out of trouble.
So, the issue... I installed 8.04 on my mother's laptop some time back as she was having trouble keeping Win 2000 running (she has a low-spec no-name generic laptop, btw). I live in Australia and she in England; I thought I could better support her if she was running Ubuntu... All has been well with 8.04, but with the new LTS release (10.04) I thought I'd talk her through the upgrade... We did the online upgrade via Update Manager and all seemed to go OK, but when she went to log on after restart she got an error saying 'authentication failed', even though we are 100% sure we have the user name and password correct... Tried Ctrl+Alt+F1 to bypass the GUI login but couldn't get the terminal session to open up, just got a black screen - no command prompt.
So... thought OK, re-install... downloaded the ISO, burned it and sent her a CD in the post... talked her through the re-install, all seemed to go well (again) - BUT, after restart, she couldn't log in: "authentication failed" again...
So, remember i'm trying to talk a novice through all this... any thoughts!?!
If I can get her to log in I can then support her via some kind of remote session or other screen share... and it won't cost me a small fortune in international calls! But if I can't get a login, I'm dead in the water!?
I already have a 300 GB SATA drive with Ubuntu 8.04 installed on it. It is currently running off my mobo's onboard SATA 1.0 RAID controller. I recently purchased a SATA 2.0 RAID PCI controller that I will be putting in the computer, along with 2 new 750 GB Western Digital Caviar Green hard drives. I wish to add the two drives in a RAID 1 configuration to store all my pictures, files, and movies on. Every instruction and tutorial I can find on setting up RAID on Linux assumes you are performing a fresh install and gives no tips or instructions for existing installations. I already have Ubuntu installed and do not wish to reinstall it. I want to leave my installation on the 300 GB drive and just add in the 2 750 GB drives for storage.
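Adding a RAID 1 array to a running install doesn't require reinstalling; mdadm can build the mirror live. A minimal sketch, assuming the two new drives show up as /dev/sdb and /dev/sdc (verify with `sudo fdisk -l` first; these names are placeholders, and the commands below destroy anything on those drives):

```shell
# Create the mirror from the two new drives (whole-disk members here;
# you can also partition them first and use /dev/sdb1, /dev/sdc1).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the new array and mount it.
sudo mkfs.ext3 /dev/md0
sudo mkdir -p /media/storage
sudo mount /dev/md0 /media/storage

# Persist the array definition and the mount across reboots.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo "/dev/md0 /media/storage ext3 defaults 0 2" | sudo tee -a /etc/fstab
```

The mirror will resync in the background after creation; `cat /proc/mdstat` shows progress.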
I have an old Fedora machine set up to use RAID 1. I'm trying to install Ubuntu 10.04 Desktop on it, but the installer can't seem to overwrite the RAID partition. I have two 80GB drives, but the "Prepare disk space" screen only shows one 80GB partition called "/dev/mapper/isw_dfafhagdgg_RAID_Volume01". When I selected "Erase and use the entire disk", it gave me the error "The ext4 file system creation in partition #1 of Serial ATA RAID isw_dfafhagdgg_RAID_Volume0 (mirror) failed".
So I tried going back and specifying partitions manually. However, it only shows me the one device /dev/mapper/isw_dfafhagdgg_RAID_Volume01, and I can't delete it to get back my original two hard drives so I can recreate a RAID 1 setup. What am I doing wrong? I thought Ubuntu supported software RAID 1?
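That single /dev/mapper device is dmraid assembling old fakeraid (BIOS RAID) metadata still present on the disks, which hides the raw drives from the partitioner. If the plan is to abandon the fakeraid and build Linux software RAID instead, one commonly suggested approach is to erase that metadata with dmraid from the live session. This is a sketch only, and it is destructive: be certain nothing on the disks is needed.

```shell
# Show which fakeraid sets dmraid has discovered
sudo dmraid -r

# Erase the fakeraid metadata from each member disk
# (-r restricts to RAID metadata, -E erases it; dmraid asks for confirmation)
sudo dmraid -r -E /dev/sda
sudo dmraid -r -E /dev/sdb
```

After a reboot the installer should see /dev/sda and /dev/sdb as plain disks, and a proper mdadm RAID 1 can be built on them.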
I have a system with RAID 0 running Windows 7. I want to load Ubuntu on a standalone disk. I have two disks for my RAID 0, two storage disks, and an extra disk for Ubuntu. I have tried to install Ubuntu, but upon startup I get an error on the disk. I then had to use GRUB to repair my MBR. My question is: how do I make this work so that I can have a dual-boot system? I have been running Ubuntu on my laptop.
I am trying to install Ubuntu 10.10 Desktop alongside Windows 7 on my 2TB Nvidia RAID 0 HDDs, but when I select where to install the OS during the Ubuntu installation, it sees through the RAID and only shows the individual HDDs and no partitions. Is there any way to install Ubuntu without having to take off RAID?
I am new to Linux but have installed Debian using the 'net method. The computer, in general, works fine. I have a USB keyboard which works OK, and a magnetic stripe reader (MSR) which the OS thinks is another keyboard. If I open a terminal and then swipe a credit card, the data are transferred to the terminal screen at the cursor, which I do not want.
How do I prevent the OS from grabbing the MSR? I want a program to address it as "/dev/hiddev0" or similar for read and write. I have tried "blacklist hiddev0" in /etc/modprobe.d/blacklist.conf, but to no avail. Regarding these forums, I, for the life of me, cannot understand why a search for USB turns up empty when I can see the term in several titles on the first page of topics.
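Blacklisting won't help here, because hiddev0 is a device node, not a module name, and the generic usbhid module also drives the real keyboard. One commonly suggested alternative is a udev rule that strips the keyboard classification from just that one device, so the desktop stops treating it as input while it stays reachable through hiddev/hidraw. This is a sketch: the vendor/product IDs below are placeholders to be replaced with the values `lsusb` reports for the reader.

```
# /etc/udev/rules.d/99-msr.rules  -- IDs below are placeholders
SUBSYSTEM=="input", ATTRS{idVendor}=="0acd", ATTRS{idProduct}=="0200", \
    ENV{ID_INPUT}="", ENV{ID_INPUT_KEYBOARD}=""
```

Reload the rules with `sudo udevadm control --reload-rules` and replug the reader for the change to take effect.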
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this problem I can not solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have a M4A87TD EVO MB with two Seagate drives in Raid 0. (The raid controller is a SB850 on that MB) I use the raid utility to create the raid drive that Windows7x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps someone to tell me that the RAID controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but fail to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a RAID controller from a MB.
First off I'd like to say I'm very new to Ubuntu, so I'm still trying to learn. I have a K8 motherboard with an Adaptec U320 SCSI card with RAID ability. To that card I connected two 15k RPM 35 GB Maxtor SCSI drives. For some reason I'm not able to install Ubuntu 9.10 onto these drives with both drives in RAID 0. With both drives separately configured, Ubuntu doesn't even see them. I have, by the way, run Windows XP and 2000 successfully on these drives in RAID 0 configuration. I set up the array in the card's BIOS as bootable with write cache enabled. The system's BIOS sees the array as the array to boot from. Ubuntu (both standard and alternate) sees the array, and I have tried to install Ubuntu on it by manually partitioning it or having it guide me, with or without LVM. I tend to delete and rebuild the array between attempts so I have a clean slate to start from every time I try.
I have no other drives (except the CD, of course) installed on this computer. The whole installation goes very well until the end, where I get a message that it could not install the boot loader (GRUB?). Every single time I've tried to install Ubuntu, in all sorts of ways, onto my RAID 0 array I have run into problems installing that boot loader, and I've tried that card and those disks in another computer as well. Tomorrow I'd like to try to manually set up the partitions with a small /boot partition on a standard hard drive and / on the array, but I'd be grateful if somebody has any ideas on how I might get it working without having to rely on another hard disk (which might not even work, of course).
I am in one of those terrible situations... My motherboard died and I am left with an Nvidia nForce RAID stripe set that I need to get data off. I guess I should have set up that backup regime. Some search results have suggested that dmraid may be my white knight. I have pulled the data off each of the disks (3 of them) into image files (using ddrescue) and created loop devices (/dev/loop[1-3], using losetup).
I am running 32-bit Ubuntu 9.10, btw. When I do a "dmraid -ay" it tells me it found a RAID set but has not activated it. When I do a "dmraid -r" it tells me I have a raid5 array on one of my /dev/sd? devices. dmraid seems to be ignoring my loop devices. Does anyone know if dmraid actually works with loop devices? If it does, is there a way for me to point it directly at the devices and get it to do its auto-magic?
I'm running a triple-core AMD 32-bit CPU, with two matched hard drives in a software RAID 1 configuration. The system is single-boot, running Hardy only. I have no Windoze compatibility issues to impede me. In some recent posts, I discussed my desire to add a few non-standard applications to Hardy. Well, it hasn't been working for me. I've succeeded in breaking my standard Python 2.5 installation, and the Python 2.6 that I was trying to install is also broken. After asking questions in various Python forums and not getting answers, I'm starting to think about my alternatives.
I have backed up all my hard-drive data, and downloaded Karmic. I'm running from the Live CD. I am considering a clean install, though of course I would save a lot of time if I could just upgrade.
Before I leap, I see one possible problem: Karmic has failed to mount any of my hard drive partitions, as RAID or otherwise. Should I worry? When I upgraded from Dapper to Hardy on an older (non-RAID) machine, I recall that my hard disk mounted from the Live CD just fine.
Also, am I correct in understanding that Karmic is RAID-aware right out of the box? I'm wondering if I'll have to set it up manually again. That took me a while. By the way, I didn't set up separate partitions for boot, root and home (stupid me). Can I do that after the fact during an upgrade?
UPDATE: I decided to reinstall and run the partitioner to get rid of the RAID. It's not worth dealing with, since the problem seems to be lower level: /dev/mapper was not listing any devices. Error 15 at GRUB points to legacy GRUB. So I'm avoiding the problem by getting rid of RAID for now; you can ignore this post. I found a nice grub2 explanation on the wiki, but it didn't help this situation since it probably isn't a GRUB problem. It's probably an installer failure to map devices properly, in that it only used what was already available and didn't create the mappings during the install. I don't know, just guessing. I had OpenSUSE 10.3 64-bit installed with software-RAID mirrored swap, boot and root. I used the alternate 64-bit Ubuntu ISO for installation. Since partitioning was already correctly set up and the RAID devices /dev/md0,1,2 were recognized by the installer, I chose to format the partitions with ext3 and accept the configuration:
The installation process failed at the point of installing GRUB. It had attempted to install the bootloader on /dev/sda2 and /dev/sdb2. I moved on, since it would not let me fiddle with the settings, and got the machine rebooted with the rescue option on the ISO used for installing. Now I can see that the root partition is populated with files as expected. dpkg lists linux-image-generic, headers, and linux-generic as installed, along with other supporting kernel packages; grub-pc is installed as well. However, the /boot partition, /dev/md1, was initially empty after the reboot. What is the procedure to get GRUB to install the bootloader on /dev/sda2 and /dev/sdb2, which make up /dev/md1, i.e. /boot?
Running apt-get update and apt-get upgrade installed a newer kernel, and this populated the /boot partition. Running update-grub results in "/usr/sbin/grub-probe: error: no mapping exists for 'md2'". grub-install /dev/md2 or grub-install /dev/sda2 gives the same error as well. Both commands indicate "Auto-detection of a filesystem module failed. Please specify the module with the option '--modules' explicitly". What are the right modules that need to be loaded for a RAID partition in the initrd? Should I be telling GRUB to use a raid module?
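In the GRUB 2 builds of that era the software-RAID modules were `raid` and `mdraid` (later releases split mdraid into mdraid09/mdraid1x, so the names vary by version), and grub-install is normally pointed at the underlying physical disks rather than the md device. A sketch, assuming /dev/sda and /dev/sdb are the two mirror members:

```shell
# Install the boot loader to the MBR of each physical disk, pulling in
# the software-RAID modules explicitly so core.img can read /dev/md1.
sudo grub-install --modules="raid mdraid" /dev/sda
sudo grub-install --modules="raid mdraid" /dev/sdb

# Regenerate grub.cfg against the new kernel.
sudo update-grub
```

Installing to both disks means the machine can still boot if either mirror member fails.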
I need to add the LSI drivers for the 9750 RAID controller during the install. These drivers are not included in 10.04 (or 10.04.1) and I need to install onto the RAID device I've created. LSI provides the drivers and instructions here - [URL]
Here are my steps, with the drivers on a USB drive - Code:
Boot from the installation CD and select Install Ubuntu Server.
Press CTRL+ALT+F2 to switch to console 2 while Ubuntu detects the network.
# mkdir /mnt2 /3ware
# mount /dev/sda1 /mnt2   (NOTE: LSI drivers are at /dev/sda1, via USB)
# cp /mnt2/9750-server.tgz /3ware
# cd /3ware ; tar zxvf 9750-server.tgz
# umount /mnt2
* Remove the USB flash drive before the insmod command *
# insmod /3ware/2.6.32-21-generic/3w-sas.ko
Press CTRL+ALT+F1 to return to the installer. Continue the installation as usual. Do not reboot when the installation is complete. Press CTRL+ALT+F2 to switch to console 2 again.
Press CTRL+ALT+F1 to return to the installer. Reboot to complete the installation. There are no errors, but after I reboot I just get "GRUB" in the upper left corner, nothing else.
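A bare "GRUB" in the corner usually means stage 1 loaded but the rest of the boot loader couldn't be read, and a likely cause here is that the installed system's initramfs lacks the 3w-sas module, so the driver also needs to be copied into the target before the reboot. A sketch of that extra console-2 step (paths follow the steps above; /target is where the Ubuntu server installer mounts the new system, which may differ on other setups):

```shell
# Copy the out-of-tree driver into the installed system's module tree
sudo cp /3ware/2.6.32-21-generic/3w-sas.ko \
    /target/lib/modules/2.6.32-21-generic/kernel/drivers/scsi/

# Rebuild the module index and the initramfs inside the target
sudo chroot /target depmod -a 2.6.32-21-generic
sudo chroot /target update-initramfs -u
```

Only after the initramfs contains 3w-sas can the kernel find the root filesystem on the 9750 at boot.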
I tried installing Ubuntu 10.04 WS on my PC but it did not see any disks to install on. I believe this is because my drives are all configured as RAID. My mobo is an Asus M3A78-EMH HDMI AM2+ socket with an Athlon 2X 5000+ CPU. The chipset is AMD 780G. I have the BIOS configured for RAID drives and I already run Win XP x32 and Win 7 x64 on it. My boot drive is configured as 'RAID READY' and I have 2 RAID 1 disks consisting of pairs of SATA drives.
From what I have researched it seems that with some tuning it should be possible to install Ubuntu 10.04 but I have little Linux experience and don't want to mess up my existing drives. I have installed Linux before a few times and run it but never with RAID. Is anyone aware of an existing disk image that I will be able to install from on my system or would it be possible for someone to create one for me to use?
I'm trying to switch to a new RAID5 array but can't get it to boot. My disks:
/dev/sda: new RAID member
/dev/sdb: Windows disk
/dev/sdc: new RAID member
/dev/sdd: old disk, currently using /dev/sdd3 as /
The RAID array is /dev/md0, which is comprised of /dev/sda1 and /dev/sdc1. I have copied the contents of /dev/sdd3 to /dev/md0, and can mount /dev/md0 and chroot into it. I did this:
Code:
sudo mount /dev/md0 /mnt/raid
sudo mount --bind /dev /mnt/raid/dev
sudo mount --bind /proc /mnt/raid/proc
[code]....
This completes with no errors, and /boot/grub/grub.cfg looks correct[EDIT: No it doesn't. It has root='(md/0)' instead of root='(md0)']. For example, here's the first entry:
Code:
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Ubuntu, with Linux 2.6.35-25-generic' --class ubuntu --class gnu-linux --class gnu --class os {
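The `(md/0)` vs `(md0)` discrepancy noted in the edit can be patched in grub.cfg directly as a workaround. A minimal sketch of the rewrite, demonstrated here on a scratch copy rather than the live /boot/grub/grub.cfg (back that file up before editing it for real):

```shell
# Reproduce the bad line in a scratch file, then rewrite the device syntax
printf "set root='(md/0)'\n" > /tmp/grub-test.cfg
sed -i "s,(md/0),(md0),g" /tmp/grub-test.cfg
cat /tmp/grub-test.cfg
```

Note the fix is cosmetic: the next run of update-grub will regenerate the file, so the underlying grub-mkconfig behavior still needs addressing.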
I am currently running Wubi 10.04 under Vista, and my Dell XPS 630i has a 1 TB Nvidia RAID controller. The first image (Option A) suggests /dev/sda as the device for boot loader installation, while the second image (Option B) suggests /dev/mapper/nvidia_bcidhdja. I think that the way to keep the RAID would be using Option B as the device for boot loader installation. Would Option A break the RAID instead?
I just had a whole 2TB software RAID 5 blow up on me. I rebooted my server, which I hardly ever do, and lo and behold I lose one of my RAID 5 sets. It seems like two of the disks are not showing up properly. What I mean by that is the OS picks up the disks, but it doesn't see the partitions.
I ran smartctl on all the drives in question and they're all in good working order.
Is there some sort of repair tool I can use to scan the busted drives (since they're available) to fix any possible errors that might be present?
Here is what the "good" drive looks like when I use sfdisk:
Quote:
sudo sfdisk -l /dev/sda
Disk /dev/sda: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+ 121600  121601- 976760001   83  Linux
/dev/sda2          0      -       0          0     0  Empty
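When the OS sees the disks but not their partitions, it is worth checking whether the md superblocks on the members are still intact before attempting any repair. A sketch, with device names following the post:

```shell
# Inspect the RAID superblock on a suspect member
sudo mdadm --examine /dev/sda1

# Scan all devices for md metadata and show what is currently assembled
sudo mdadm --examine --scan
cat /proc/mdstat
```

If --examine still reports valid superblocks on the "missing" members, the array can usually be reassembled with `mdadm --assemble --scan` (or with an explicit member list) rather than repaired at the filesystem level.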
I have created a system using four 2 TB HDDs. Three are members of a software-RAID mirror (RAID 1) with a hot spare, and the fourth HDD is an LVM hard drive separate from the RAID setup. All HDDs are GPT-partitioned.
The RAID is set up as /dev/md0 for the mirrored /boot partitions (non-LVM), and /dev/md1 is LVM with various logical volumes within for swap space, root, home, etc.
When grub installs, it says it installed to /dev/sda but it will not reboot and complains that "No boot loader . . ."
I have used the Super Grub Disk image to get the machine started, and it finds the kernel; but although "grub-install /dev/sda" reports success, the computer still will not start, giving "No boot loader . . .". (Currently, because it is running, I cannot restart to get the complete complaint phrase, as md1 is syncing. Thought I'd let it finish the sync operation while I search for answers.)
I have installed and re-installed several times, trying various settings. My question has become: when setting up GPT and reserving the first gigabyte for GRUB, users cannot set the boot flag for that partition. I have tried GParted as well as the normal Debian partitioner, and both will NOT let you set the "boot flag" on that partition. So, as a novice (to Debian), I am assuming that the "boot flag" does not matter.
Other readings indicate that yes, you do not need a "boot flag" partition; the "boot flag" is only for a Windows partition. This is a Debian-only server, no Windows OS.
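That matches how GPT works: what GRUB needs on a BIOS-booted GPT disk is not a boot flag but a small BIOS boot partition (flag `bios_grub`) to embed its core image into; without one, grub-install has nowhere to put it and the "No boot loader" symptom above is typical. A sketch with parted, assuming the reserved space is partition 1 on /dev/sda:

```shell
# Mark partition 1 as the BIOS boot partition (about 1 MB is enough)
sudo parted /dev/sda set 1 bios_grub on

# Re-run grub-install against the whole disk, not a partition
sudo grub-install /dev/sda
```

GParted and the Debian partitioner both expose `bios_grub` as a partition flag, separate from the MBR-style "boot" flag.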
I am doing a new install and have 4 drives (2x 500 GB and 2x 2 TB). What I want is the OS on the 2x 500 GB and the data on the 2 TB drives. The idea is to make the 500s one RAID 1 set and the 2 TB drives another RAID 1 set. I think the installer is trying to build the RAID set for the OS, but the root is looking like RAID 0 rather than 1. Is there some way to specify?
Just had my RAID 5 config on SME Server 7.4 crash due to a faulty drive. I have managed to set up a machine with Ubuntu 9.04 and installed the drives in their original config (I haven't overwritten them). Ubuntu can see the RAID config, but for some reason I cannot mount it, as it keeps asking for the filesystem, which if I check says LVM2_member. Does anyone know how I could get it to mount?
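An "LVM2_member" partition can't be mounted directly: it is an LVM physical volume, and the logical volumes inside it have to be activated first. A sketch, where the volume-group and LV names are placeholders to be read off the vgscan/lvs output:

```shell
# Find and activate any volume groups on the assembled array
sudo vgscan
sudo vgchange -ay

# List the logical volumes, then mount the one holding the data
sudo lvs
sudo mount /dev/main/root /mnt   # "main/root" is a guess at SME's naming
```

Mounting read-only (`-o ro`) is a sensible precaution while recovering from a degraded array.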
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root LVM as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, then when I run the command $ sudo grub-install --root-directory=/mnt/ /dev/md0, I get errors:
grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea.
grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root LVM as 'dev/mapper/vg-root'. This results in the grub-install error:
$ sudo grub-install --root-directory=/mnt/ /dev/md0
Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
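For what it's worth, both errors above point the same way: grub-setup refuses /dev/md0 because it cannot embed its core image in a RAID member partition, and the 'Invalid device' message comes from the missing leading slash in the device paths. The usual workaround is to install to the MBR of each underlying physical disk instead. A sketch, assuming the mirror members are /dev/sda and /dev/sdb:

```shell
# With the root LVM mounted at /mnt and /dev/md0 mounted at /mnt/boot:
sudo grub-install --root-directory=/mnt /dev/sda
sudo grub-install --root-directory=/mnt /dev/sdb
```

With GRUB in each disk's MBR, either mirror member can boot the machine, and the RAID/LVM stack is opened later by the initramfs.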