Ubuntu Installation :: Server Detects But Can't Partition Hardware RAID 5
Jul 13, 2010
I am trying to install Ubuntu Server 10.04 on a home server I am building. I have three 1 TB drives set up in RAID 5 via my mobo (ASUS M2NPV-VM). The installation detects that I have a RAID array, but when it goes to partition, all it shows me is the USB stick I am installing from.
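If the installer environment has dmraid available (the server installer normally offers to activate Serial ATA RAID devices), a quick check from the installer shell (Alt+F2) or a live session can show whether the fakeRAID set is being seen at all. A sketch; the set name it prints depends on the chipset:
Code:
sudo dmraid -r
sudo dmraid -ay
ls /dev/mapper/
If the array activates, it shows up under /dev/mapper and the partitioner can be pointed at that device instead of the raw disks.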
EFI GUID Partition support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table will get corrupted. By default Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Does Ubuntu have no GPT support native to the server install? I've never compiled a kernel. Is that in itself going to be a mind bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8 TB RAID drive, or the individual disks, if it screws up. Been on that ride way too long already. Need things to go smoothly. This is not a play toy server, but will be used in a business.
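For what it's worth, whether GPT support is compiled into the kernel you are actually running can be checked without recompiling anything; a quick sketch (the config file name follows the installed kernel version):
Code:
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
If that prints CONFIG_EFI_PARTITION=y, the running kernel already understands GPT partition tables.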
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my VirtualBox guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a GRUB error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use LUKS-encrypted LVM on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root LVM as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, when I run the command $ sudo grub-install --root-directory=/mnt/ /dev/md0, I get these errors:
grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea.
grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root LVM as 'dev/mapper/vg-root'. This results in the grub-install error:
$ sudo grub-install --root-directory=/mnt/ /dev/md0
Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
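For reference, the usual suggestion for that embedding error is to install GRUB2 into the MBR of each disk underneath the RAID rather than into /dev/md0 itself. A rough sketch, assuming the array members are /dev/sda and /dev/sdb (substitute your real disks) and everything is still mounted under /mnt:
Code:
sudo grub-install --root-directory=/mnt /dev/sda
sudo grub-install --root-directory=/mnt /dev/sdb
Installing to both MBRs means either disk can still boot the machine if the other one fails.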
I found a workaround of sorts. It looks like this is related to a 9.04 bug [URL] and the loopback workaround brings back the array. It is not clear how I will handle this long term.
Note: before using this technique, I used gparted to tag the partitions as "raid". They disappeared again on reboot, so I had to do it again. I am not sure how this is going to work out long-term.
Note: I suspect some of this is related to the embedded "HOMEHOST" that is written into the RAID metadata on the partitions. The server was misnamed when first built and the name was changed later (cerebus -> cerberus), and the old name has surfaced in the name of a phantom device reported by gparted - /dev/mapper/jmicron_cerebus_root
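If the stale homehost really is part of the problem, mdadm can rewrite it in the superblocks while assembling the array. A sketch, assuming the array is /dev/md0 and its members are /dev/sdc1 and /dev/sdd1 (these names are placeholders):
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 --update=homehost --homehost=cerberus /dev/sdc1 /dev/sdd1
Updating the ARRAY line in /etc/mdadm/mdadm.conf and running sudo update-initramfs -u afterwards should make the change stick across reboots.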
I have a Mythbuntu 9.10 system that I have upgraded from 8.10 to 9.04 to 9.10 in the last 2 days. I am on my way to 10.x, but need to make sure it works after every step.
The basic problem is that in its current incarnation, it is not recognizing the underlying partitions for one of the RAID devices, and is therefore not happy.
As an 8.10 system I had 2 RAID devices:
/dev/md16 -> /dev/sda5 and /dev/sdb5
/dev/md21 -> /dev/sdc1 and /dev/sdd1
/etc/fstab looked like this (in part):
/dev/md16 /var/lib xfs defaults 0 2
[Code]....
I don't *really* want to repartition the drive as there is a small amount of data loss between recent backups and what is on the drive, plus it would take me 2 days to move the data back.
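To see why the array is not coming up, it helps to look at what the kernel and mdadm actually report. A sketch, assuming the missing device is /dev/md16 built from /dev/sda5 and /dev/sdb5:
Code:
cat /proc/mdstat
sudo mdadm --examine /dev/sda5 /dev/sdb5
sudo mdadm --assemble --scan --verbose
If --examine shows valid superblocks but the scan skips them, the ARRAY lines in /etc/mdadm/mdadm.conf or a partition type that is no longer 0xFD (Linux raid autodetect) are the usual suspects.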
The ext4 file system creation in partition #1 of Serial ATA RAID isw_fjidifbhi_Volume0 (mirror) failed. I'm not sure what's wrong. How do I solve this? This happens after I enter my login details and click next; then it fails on the install screen with this error.
I'm trying to resize an NTFS partition on an IBM MT7977 Server. It has a Adaptec AIC-9580W RAID controller. I was thinking about doing it with a gparted LiveCD/LiveUSB, but then I realised that they won't have drivers for the RAID controller. A quick google for "9580W Linux" doesn't return anything promising.
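It may still be worth booting the live environment and checking whether the controller's logical drive is exposed at all before assuming there is no driver. A generic check, nothing specific to that Adaptec model:
Code:
lspci -nn | grep -i -e raid -e adaptec
cat /proc/partitions
If the logical volume appears in /proc/partitions, GParted can resize the NTFS partition on it like on any other block device.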
I tried to install Ubuntu 10.10 from the LiveCD, but the partitions on my 160 GB Western Digital hard disk are shown as unallocated free space. In the Linux terminal, when I fire the command sudo fdisk -l it does not display anything. On the current HD I have installed Windows XP successfully.
Sadly, I googled the heck out of this and even the VMware community has no answers. So, I decided to turn to the experts and see if you can provide a better solution than [URL]. The system sees both CPUs and all of the cores (cat /proc/cpuinfo). I have tried with HT and without HT (I never run HT anymore; it seems to hurt performance in my workloads). I've tried the recent and newest kernels for CentOS 5.4 and still have no luck. There seem to be lots of people having this issue, but no real solutions.
Dell PE R710 with 2x Xeon X5550 CPUs
Kernel 2.6.18-164.11.1.el5 #1 SMP x86_64
Fresh CentOS 5.4 install
I can't seem to get past step 6 of the installation of Ubuntu 10.04. I get the error "The ext4 file system creation failed..." on a single partition (no RAID). I chose '/' as the mount point, and have tried with and without a swap partition. I'm installing on a Sony VAIO VGN-NS160D, and the HDD was previously formatted to NTFS. There's no other OS, so I don't see any way of getting a command line to try a sudo fdisk.
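One possible route (assuming the standard desktop CD is being used): the "Try Ubuntu" option boots a full live session with a terminal, so the old NTFS layout can be inspected and wiped before re-running the installer. A sketch, assuming the internal disk is /dev/sda:
Code:
sudo fdisk -l
# destroys everything on the disk: only run it if nothing on there matters
sudo parted /dev/sda mklabel msdos
With a fresh, empty partition table the installer's ext4 creation step usually has nothing left to trip over.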
Q1) I was wondering if it is possible to dual boot Ubuntu with Windows XP on a 1 TB RAID-0 setup?
Q2) Also, is it possible to create a swap partition (for Ubuntu) on a non-RAID-0 HDD?
Q3) Lastly... I read GRUB2 is the default boot manager... should I use that, or GRUB legacy / LILO?
I have a total of 3 HDDs on this system:
-- 2x 500 GB WD HDDs (non-advanced format) ... RAID-0 setup
-- 1x 320 GB WD HDD (non-RAID setup)
(The non-RAID HDD is intended to be a swap drive for both XP and Ubuntu = 2 partitions)
I plan on making multiple partitions... and reserve partition space for Ubuntu (of course).
I have the latest version of the LiveCD created already.
Q4) Do I need the Alternate CD for this setup?
I plan on installing XP before Ubuntu.
This is my 1st time dual booting XP with Ubuntu.
I'm using these as my resources: - [url] - [url]
Q5) Anything else I should be aware of (possible issues during install)?
Q6) Lastly... is there anything in Ubuntu like the AHCI (Advanced Host Controller Interface) driver in Windows?
(Since I need a special floppy during the Windows install...) I want to be able to use the native command queuing (NCQ) capabilities of my SATA drives in Ubuntu.
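As a side note, Linux drives AHCI natively through the ahci kernel module, so no extra driver disk is needed; whether AHCI and NCQ are actually in use can be checked from a running system. A sketch (sda is just an example device):
Code:
lsmod | grep ahci
dmesg | grep -i ncq
cat /sys/block/sda/device/queue_depth
A queue depth greater than 1 generally means NCQ is active; if the BIOS is set to IDE/compatibility mode rather than AHCI, it will usually read 1.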
I made a bootable USB drive using Universal USB Installer on Windows 7. When I try to boot from it, my computer never detects it, however it did detect the USB DVD drive. How can I check if the USB drive is actually bootable?
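A simple first check (assuming the stick shows up as /dev/sdb; confirm with dmesg or sudo fdisk -l first) is to look at its partition table and boot flag:
Code:
sudo fdisk -l /dev/sdb
For a BIOS-bootable MBR stick, the partition being booted should show a * in the Boot column; if it does and the machine still ignores it, the BIOS boot menu or a different USB port is the usual next thing to try.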
I have 6 GB of RAM and I'm planning to install Fedora 14 32-bit to achieve a higher degree of compatibility. Does Fedora automatically download and install a PAE-enabled kernel when it detects more than 4 GB of RAM (just like Ubuntu)?
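Whatever the installer decides, it is easy to verify afterwards whether the CPU supports PAE and whether a PAE kernel ended up running. A quick sketch:
Code:
grep -o pae /proc/cpuinfo | head -1
uname -r
free -m
On Fedora the PAE kernel shows a .PAE suffix in uname -r, and free -m should report close to the full 6 GB if all of the memory is addressable.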
I recently assembled a small PC with two 2 TB hard disks to set up as a home server, and I figured I'd install Ubuntu Server 10.04 on it. As simple as that sounds, I'm pretty close to restructuring the machine with a kitchen knife. The data on the server will mainly consist of backups, so I'd like the disks configured in RAID 1; a hardware RAID controller was way over budget/overkill since software RAID seemed perfectly sufficient for this purpose. So I downloaded the 64-bit server installer for 10.04 and put it on a USB stick (no optical drive in the machine) using the usb-creator tool.
Everything goes perfectly well up to the partition editor. I try to create the partitions I want for the RAID configuration, but I cannot switch the boot flag to on when I select "physical volume for RAID". When trying to switch it, it briefly shows a progress bar and it's still set to off. I got this problem in both the 8.10 and 10.04 installers (I figured I'd try configuring in a different version, but no luck). Right now I have just configured the boot partition (and several others) to be on one disk, and the data partition is the only one configured as RAID 1, which took ages to apply (I actually thought that had crashed too when I started this topic..), but this is far from ideal since I won't be able to boot from disk if one of them fails and it might be a pain to recover the data. How do I make a "physical volume for raid" bootable in the installer?
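If the installer simply refuses to toggle the flag, one workaround (a sketch, assuming the RAID members are partition 1 on /dev/sda and /dev/sdb, run from the installed system or a chroot into it) is to set the boot flag afterwards and install GRUB to both disks so either one can boot on its own:
Code:
sudo parted /dev/sda set 1 boot on
sudo parted /dev/sdb set 1 boot on
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
GRUB2 itself does not need the boot flag, but some BIOSes refuse to boot a disk whose partition table has no active partition, so setting it on both members does no harm.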
I booted Hardy, because Karmic detects no screen, after trying to adjust to a previously recognized resolution. As good as it is, does it seem like some basic computer functions just do NOT improve?
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
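A rough rule of thumb (an estimate, not a vendor figure): a hardware RAID 1 rebuild copies the entire disk regardless of how much of it is used, so the time is roughly capacity divided by sustained write speed. 1 TB at about 80 MB/s works out to roughly 12,500 seconds, i.e. around 3.5 hours; with other I/O running, or a controller that throttles rebuilds, 6 to 12 hours is common.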
I'm trying to install a box with Ubuntu 10.04 LTS Server with a typical LAMP system in order to replace my old Ubuntu 8.04 server I have at my school, to have a "backup" system and also to replace NIS authentication with LDAP. Well, I'm getting stuck on the first step: installation of the base system. I want to build a RAID 1 system with the two 320 GB SATA HDs the machine has; I have a little experience in installing RAID because I installed my old 8.04 server with RAID 1 as well. I boot my box from a USB stick with Ubuntu 10.04 Server 64-bit, the system boots well, asks me about language, keyboard and so on, finds the two NIC cards, and the DHCP configuration of one of the cards is done.
Then it starts the partitioner. One of the HDs already contains three partitions with an installation of a regional flavour of Ubuntu, and the other HD only contains a partition for backups. I don't want to preserve any of this stuff, so the first thing I do is replace the partition tables of both HDs with new ones. This is done without problems. Then I go to the first disk, for example, and create a new partition; the partitioner asks me for the size, I write 0.5 GB (or 500 MB), then I select that it has to be a primary partition at the beginning of the disk, and all goes OK.
Once created, I go to the "Use as:" line, hit Enter and select the option "Partition for RAID volume". When I hit Enter the error appears: the screen flickers to black for a second or two, then it shows a progress bar "Starting the partitioner..." that always gets stuck at 47%!!! Sometimes the partitioner lets me progress a little further (for example it lets me activate the boot bit of the partition, or it allows me to make another partition; once the error didn't appear until the first partition of the second HD!!), but it always gets stuck with the same progress bar at 47%.
I've tried a lot of things: downloaded the ISO again and rebuilt the USB, same result. Downloaded the ISO and rebuilt the USB from another computer, same result. Unplugged all the SATA and IDE drives except the two HDs, same result. Built a CD-ROM instead of a USB, same result. Downloaded the 10.10 server ISO (not an LTS), and the USB stick can't boot; that is another error, but it was only to try.
When the error appears, I hit Ctrl+Alt+F2 and get into a root prompt, where I kill two processes: /bin/partman and /lib.../init.d/35... (don't remember the exact path). Then, when I return to the first console, the progress bar has gone and the install process asks me at which step I want to resume. I hit "Partition disks" and then the progress bar reappears and sticks immediately at 47%!!! Is the Ubuntu 10.04 LTS server installer broken?
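When the partitioner hangs like that, the installer's own logs usually say what it is stuck on. From the Ctrl+Alt+F2 shell (these are the standard debian-installer log locations):
Code:
tail -f /var/log/syslog
tail -50 /var/log/partman
The fourth console (Alt+F4) shows the syslog live as well; a hang at a fixed percentage often corresponds to partman waiting on one specific device, which the log will name.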
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is being recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping being created in /dev).
I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success and the new logical drive could be used, but when restarting the server the controller's BIOS fails with an error (not surprised; one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
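Two generic things worth checking here: whether the megaraid_sas module is actually loaded against the controller, and whether a SCSI bus rescan makes the logical drive appear. A sketch; host0 is a placeholder, so repeat for whatever hostN entries exist under /sys/class/scsi_host:
Code:
lsmod | grep megaraid_sas
lspci | grep -i lsi
echo "- - -" > /sys/class/scsi_host/host0/scan
cat /proc/scsi/scsi
If the logical drive defined on the DX60 shows up after the rescan it will get a /dev/sdX node; if it never does, the controller may simply not be exporting it to the OS.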
I'm trying to install Lucid server (64 bit) and I'm having trouble getting it to boot from software RAID. The hardware is an old Gateway E-4610D (1.86GHz Core2 Duo, 2GB RAM, 2 500GB HDs).
If I install on a single HD, it works fine, but if I set up RAID 1 as described here, the install completes, but on reboot this is what I get:
Code:
mount: mounting /dev/disk/by-uuid/***uuid snipped*** on /root failed: Invalid argument
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
[Code].....
When I set up the RAID using the ubuntu installer, I created identical main and swap partitions on each of the two drives (all four are set as primary), made sure the bootable flag was on on the two main partitions, then created a RAID1 out of the two main partitions, and another RAID1 out of the two swap partitions.
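When a 10.04 software-RAID install drops to that initramfs error, the usual causes are the initramfs not knowing about the array and GRUB only being present on one disk. A rough recovery sketch from a rescue environment that has mdadm (device names are examples):
Code:
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
update-initramfs -u
grub-install /dev/sda
grub-install /dev/sdb
Making sure /etc/mdadm/mdadm.conf lists the array before regenerating the initramfs is the part that usually fixes the mount-by-uuid failure.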
I just got my new server today (a Dell Poweredge 2850), and I'm trying to install the latest version of Ubuntu on it, however it fails to detect my hard drives. They are in a raid array (raid 1 I believe). I've never worked with a raid array before, and I'm wondering, is there anything special I have to do to install Ubuntu on one?
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but I cannot solve this problem by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have a M4A87TD EVO MB with two Seagate drives in Raid 0. (The raid controller is a SB850 on that MB) I use the raid utility to create the raid drive that Windows7x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others, is that when I load into Ubuntu, gparted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Manager. Gparted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps someone to tell me that the RAID controller or some hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but fail to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a RAID controller from a MB.
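For BIOS/fakeRAID sets like the SB850's, the tool the live session usually needs is dmraid rather than kpartx; it reads the controller's metadata and exposes the striped set under /dev/mapper. A sketch from the live session (treat the package and device names as assumptions):
Code:
sudo apt-get install dmraid
sudo dmraid -ay
ls /dev/mapper/
GParted should then show a single /dev/mapper device (the name prefix depends on the metadata format) representing the RAID 0 set instead of the two raw disks.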
I want to create a file server with Ubuntu and have two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750 GB hard drive (installed Ubuntu on this drive), installed the Rosewill RAID card in a PCI slot, connected two 1 TB hard drives to it, then went into the RAID BIOS and configured it for RAID 1.
My problem: when I boot into Ubuntu and go to the hard drive utility (I think that's what it's called), I see the RAID controller present with the two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I see two separate hard drives. Just for giggles, I also tried RAID 0 to see if that would combine the drives.
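Cards like that are typically fakeRAID, meaning the mirroring is done by a driver rather than by the card, so if Ubuntu only ever sees two plain disks one option is to skip the card's RAID BIOS entirely and let Linux mirror them with mdadm. A sketch, assuming the two 1 TB disks appear as /dev/sdb and /dev/sdc and hold nothing you need:
Code:
sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
cat /proc/mdstat shows the initial sync progress; once it finishes, add the array to /etc/fstab and it behaves like a single 1 TB disk.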
I want to upgrade from another distro to Ubuntu Server for a few reasons. The only problem is I have a lot of data that needs to survive. Here is how my computer is set up. I have 5 drives on the computer:
A - 10 GB drive for OS and swap only, no data
B, C, D, E - 4x 500 GB drives in an LVM. They make up one large drive with XFS, and this volume has about 1.2 TB of data. There is nothing fancy on it, no encryption and no software RAID. Of course the little 10 GB drive can be formatted no problem, but the LVM needs to be migrated over intact.
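As a note, an LVM volume group normally survives a reinstall as long as the installer is never told to format or repartition those four disks; after the new system is up, the group can be reactivated and mounted. A sketch (the volume group and logical volume names are placeholders for whatever the existing ones are called):
Code:
sudo apt-get install lvm2 xfsprogs
sudo vgscan
sudo vgchange -ay
sudo lvs
sudo mount /dev/mapper/myvg-mylv /srv/data
lvs reports the real group and volume names to use in the mount command and in /etc/fstab.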
I just installed Debian 5.0.4 successfully. I want to use the PC as a file server with two drives configured as a RAID 1 device. Everything with the RAID device works fine; the only question I have concerns the GRUB 0.97 bootloader. I would like to be able to boot my server even if one of the disks fails or the filesystem containing the OS becomes corrupt, so I configured only the data partitions to be a RAID 1 device, so on the second disk there should be a copy of the last stable installation, similar to this guide: [URL]...
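With GRUB 0.97 the usual trick for surviving a disk failure is to install the boot loader into the MBR of both disks, temporarily mapping the second disk as (hd0) so it can boot on its own. A sketch, assuming the disks are /dev/sda and /dev/sdb and /boot lives on the first partition of each:
Code:
grub
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Provided the second disk actually carries a copy of /boot, either disk can then be picked in the BIOS and will find its own stage2 and menu.lst.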
I'm attempting to install F13 on a server that has a 2-disk RAID setup in the BIOS. When I get to the screen where I select what drive to install on, there are no drives listed. The hard drives were completely formatted before starting the 13 installation. Do I need to put something on them before Fedora will install?
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty, and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was thinking of recreating the array in place with:
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
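Before anything destructive, the step usually suggested for exactly this situation (members wrongly marked faulty, data itself untouched) is a forced assemble, which fixes up the event counters without rewriting the data. A sketch using the same member devices:
Code:
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
cat /proc/mdstat
Only if that fails is --create --assume-clean worth considering, and then only with the devices in their original order and with the original chunk size and metadata version.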
I am looking to convert a RAID 1 server I have to RAID 10. It is using software RAID; currently I have 3 drives in RAID 1. Is it possible to boot into CentOS rescue, stop the RAID 1 array, then create the RAID 10 with 4 drives, 3 of which still have the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data? Or is there a better way to do it?
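Creating a RAID 10 directly over disks that still carry RAID 1 data will not preserve that data; mdadm treats --create as a brand new array. The approach more often suggested is to build the RAID 10 degraded with the new disk plus one disk pulled out of the mirror, copy the data across, and then donate the remaining disks. A rough sketch with hypothetical device names and paths (/dev/md0 is the existing RAID 1 of sda1/sdb1/sdc1, /dev/sdd1 is the new disk):
Code:
# pull one member out of the old mirror; it still has two copies left
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
# build the new RAID 10 degraded, one disk missing from each mirror pair
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt/new
rsync -aHAX /srv/data/ /mnt/new/
# retire the old array and hand its disks to the new one
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
mdadm /dev/md1 --add /dev/sda1 /dev/sdb1
Since the box boots from this storage, the bootloader, /etc/fstab, mdadm.conf and the initrd all need updating for the new array before the final reboot.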