Ubuntu Installation :: Server 10.04 With Software RAID?
Sep 17, 2010
I recently assembled a small PC with two 2TB hard disks to set up as a home server, and I figured I'd install Ubuntu Server 10.04 on it. As simple as that sounds, I'm pretty close to restructuring the machine with a kitchen knife. The data on the server will mainly consist of backups, so I'd like the disks configured in RAID 1; a hardware RAID controller was way over budget/overkill since software RAID seemed perfectly sufficient for this purpose. So I downloaded the 64-bit server installer for 10.04 and put it on a USB stick (no optical drive in the machine) using the usb-creator tool.
Everything goes perfectly well up to the partition editor. I try to create the partitions I want for the RAID configuration, but I cannot switch the boot flag to on when I select "physical volume for RAID". When I try to switch it, it briefly shows a progress bar and the flag is still set to off. I got this problem in both the 8.10 and 10.04 installers (I figured I'd try configuring in a different version, but no luck). Right now I have configured the boot partition (and several others) to be on one disk, with the data partition the only one configured as RAID 1, which took ages to apply (I actually thought that had crashed too when I started this topic..), but this is far from ideal since I won't be able to boot from disk if one of them fails and it might be a pain to recover the data. How do I make a "physical volume for RAID" bootable in the installer?
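For anyone hitting the same wall: as far as I know GRUB itself doesn't require the boot flag to boot from a RAID 1 /boot, so the array is usually still bootable without it. If your BIOS does insist on an active partition, one workaround is to set the flag by hand afterwards (from a live CD, or from the installer's second console via Ctrl+Alt+F2 if parted is available there). A minimal sketch, assuming the RAID member partitions are /dev/sda1 and /dev/sdb1 (example names):
Code:
# set the bootable flag on both RAID member partitions, then verify
sudo parted /dev/sda set 1 boot on
sudo parted /dev/sdb set 1 boot on
sudo parted /dev/sda print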
How long does a hardware RAID card (RAID 1, 2 drives) take to mirror a 1TB drive (500GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
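There is no fixed answer since it depends on the controller and drives, but a rough back-of-the-envelope estimate is easy: most controllers rebuild the whole 1TB surface (not just the 500GB used) at somewhere around 50-100 MB/s sustained.
Code:
# 1 TB is roughly 1,000,000 MB; estimated rebuild time in hours at 100 MB/s and 50 MB/s
echo "scale=1; 1000000 / 100 / 3600" | bc   # prints 2.7
echo "scale=1; 1000000 / 50 / 3600" | bc    # prints 5.5
So a few hours to roughly half a day is typical, and most cards rebuild in the background while the volume stays usable.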
I'm trying to install a box with Ubuntu 10.04 LTS Server with a typical LAMP stack in order to replace my old Ubuntu 8.04 server at my school, to have a "backup" system and also to replace NIS authentication with LDAP. Well, I'm getting stuck on the first step: installation of the base system. I want to build a RAID 1 system with the two 320GB SATA HDs the machine has; I have a little experience installing RAID because I installed my old 8.04 server with RAID 1 as well. I boot my box from a USB stick with Ubuntu 10.04 Server 64-bit, the system boots fine, asks me about language, keyboard and so on, finds the two NIC cards, and the DHCP configuration of one of the cards is done.
Then it starts the partitioner. One of the HDs already contains three partitions with an installation of a regional flavour of Ubuntu, and the other HD only contains a partition for backups. I don't want to preserve any of this, so the first thing I do is replace the partition tables of both HDs with new ones. This is done without problems. Then I go to the first disk, for example, and create a new partition; the partitioner asks me for the size, I enter 0.5 GB (or 500 MB), then I select that it has to be a primary partition at the beginning of the disk. All goes OK.
Once it is created I go to the "Use as:" line, press Enter and select the option "Partition for RAID volume". When I hit Enter the error appears: the screen flickers black for a second or two, then it shows a progress bar "Starting the partitioner..." that always gets stuck at 47%!!! Sometimes the partitioner lets me progress a little further (for example it lets me activate the boot flag of the partition, or it lets me make another partition; once the error didn't even appear until the first partition of the second HD!!), but it always gets stuck with the same progress bar at 47%.
I've tried a lot of things: I downloaded the ISO again and rebuilt the USB, same result. Downloaded the ISO and rebuilt the USB from another computer, same result. Unplugged all the SATA and IDE drives except the two HDs, same result. Burned a CD-ROM instead of using a USB, same result. Downloaded the 10.10 server ISO (not an LTS) and the USB stick can't even boot; that's a different error, but I only tried it as a test.
When the error appears, I hit Ctrl+Alt+F2 and get to a root prompt. There I kill two processes: /bin/partman and /lib...don'tremember/init.d/35... and then, when I return to the first console, the progress bar has gone and the install process asks me at which step I want to resume. I choose "Partition disks" and then the progress bar reappears and gets stuck immediately at 47%!!! Is the Ubuntu 10.04 LTS server installer broken?
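One generic tip for anyone debugging this kind of hang: the installer keeps a log that usually says what partman choked on, and it is readable from that same root prompt (or on tty4).
Code:
# from the installer console (Ctrl+Alt+F2), watch the installer log while the bar is stuck
tail -f /var/log/syslog
# partman's working state lives here if you want to dig further
ls /var/lib/partman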
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.
The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).
I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That worked and the new logical drive could be used, but when restarting the server the controller's BIOS fails with an error (not surprising: one logical RAID 0 on one physical HDD) and the configuration is erased.
Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS? If so, what were the results?
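Not the identical setup here, but a few generic checks (run as root) can confirm whether the kernel has actually bound megaraid_sas and whether a rescan makes the logical drive appear:
Code:
# is the driver loaded, and did the kernel see the controller?
lsmod | grep megaraid_sas
lspci | grep -i lsi
dmesg | grep -i megaraid
# force a rescan of every SCSI host in case the logical drive was created after boot
for host in /sys/class/scsi_host/host*; do echo "- - -" > $host/scan; done
# any new block devices?
cat /proc/partitions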
I'm trying to install Lucid server (64 bit) and I'm having trouble getting it to boot from software RAID. The hardware is an old Gateway E-4610D (1.86GHz Core2 Duo, 2GB RAM, 2 500GB HDs).
If I install on a single HD, it works fine, but if I set up RAID1 as described here, the install completes fine, but on reboot this is what I get:
Code:
mount: mounting /dev/disk/by-uuid/***uuid snipped*** on /root failed: Invalid argument
mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
[Code].....
When I set up the RAID using the Ubuntu installer, I created identical main and swap partitions on each of the two drives (all four set as primary), made sure the bootable flag was set on the two main partitions, then created a RAID 1 out of the two main partitions and another RAID 1 out of the two swap partitions.
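For anyone who ends up with the same unbootable mirror: what has worked for me before is booting a live/alternate CD, assembling the arrays, and reinstalling GRUB into the MBR of both disks from a chroot. A sketch, assuming the root array is /dev/md0 and the disks are sda/sdb (adjust to your layout):
Code:
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt update-initramfs -u
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt grub-install /dev/sdb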
I am trying to install Ubuntu Server 10.04 on a home server I am building. I have three 1TB drives set up in RAID 5 via my mobo (ASUS M2NPV-VM). The installation detects that I have a RAID array, but when it gets to partitioning, all it shows me is the USB stick I am installing from.
I just got my new server today (a Dell PowerEdge 2850), and I'm trying to install the latest version of Ubuntu on it; however, it fails to detect my hard drives. They are in a RAID array (RAID 1, I believe). I've never worked with a RAID array before, and I'm wondering: is there anything special I have to do to install Ubuntu on one?
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but this is a problem I cannot solve by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO MB with two Seagate drives in RAID 0 (the RAID controller is the SB850 on that MB). I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 chunk of unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Package Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx. I received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
Currently I am reflashing the Cruzer with a persistence of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if this made a difference.
I am looking for any suggestions on a different method, or perhaps for someone to tell me that the RAID controller or some other hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a motherboard RAID controller.
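In case it helps the next person with an SB850 fakeRAID: the two pieces that normally make the BIOS RAID set show up as one device in a live session are dmraid and kpartx, roughly in this order (the name under /dev/mapper varies by chipset, so treat it as a placeholder):
Code:
sudo apt-get install dmraid kpartx
# activate all BIOS RAID sets and list what was found
sudo dmraid -ay
sudo dmraid -s
ls /dev/mapper/
# create partition mappings inside the activated set (use your set's name)
sudo kpartx -av /dev/mapper/<raid_set_name>
The alternate CD ships dmraid as part of the installer, which is why the dual-boot-on-fakeRAID guides tend to use it instead of the desktop live CD.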
I want to create a file server with Ubuntu and have two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750GB hard drive (installed Ubuntu on this drive). I installed the Rosewill RAID card in a PCI slot, connected two 1TB hard drives to the Rosewill RAID card, then went into the RAID BIOS and configured it for RAID 1.
My problem: when I boot into Ubuntu and go to the disk utility (I think that's what it's called), I see the RAID controller present with two hard drives configured separately. I formatted and tried various partition combinations, and at the end of the day I still see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
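Cards in the RC-201 class are "fakeRAID": the mirroring is done by the driver/BIOS, so a plain Ubuntu install will keep seeing two separate drives. Since this box is a dedicated Linux file server anyway, one clean option is to leave the card's RAID BIOS out of it and let mdadm do the mirroring. A sketch, assuming the two 1TB drives appear as /dev/sdb and /dev/sdc (example names; this wipes them):
Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/storage && sudo mount /dev/md0 /srv/storage
# make the array assemble automatically at boot
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u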
I want to upgrade from another distro to Ubuntu Server for a few reasons. The only problem is I have a lot of data that needs to survive. Here is how my computer is set up. I have 5 drives in the computer:
A - 10GB drive for OS and swap only, no data
B, C, D, E - 4x 500GB drives in an LVM volume group. They make up one large drive with XFS, and this volume has about 1.2TB of data. There is nothing fancy on it: no encryption and no software RAID. Of course the little 10GB drive can be formatted, no problem, but the LVM needs to be migrated over intact.
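For what it's worth, the volume group metadata lives on the member disks themselves, so a fresh Ubuntu Server install on drive A should be able to pick up the existing LVM untouched, as long as the installer never writes to B-E. After the install, roughly (the volume group and logical volume names are whatever vgscan/lvs report, not literal):
Code:
sudo apt-get install lvm2 xfsprogs
# detect and activate the existing volume group on the four 500GB drives
sudo vgscan
sudo vgchange -ay
sudo lvs
# mount the existing XFS logical volume (substitute the real VG/LV names)
sudo mkdir -p /mnt/data
sudo mount -t xfs /dev/<vgname>/<lvname> /mnt/data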
I just installed Debian 5.0.4 successfully. I want to use the PC as a file server with two drives configured as a RAID 1 device. Everything with the RAID device works fine; the only question I have concerns the GRUB 0.97 bootloader. I would like to be able to boot my server even if one of the disks fails or the filesystem containing the OS becomes corrupt, so I configured only the data partitions to be a RAID 1 device, so that on the second disk there is a copy of the last stable installation, similar to this guide: [URL]...
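One common belt-and-braces step with GRUB 0.97, whatever you decide about mirroring the OS partition, is to install the bootloader into the MBR of both disks so the BIOS can still find a boot sector if the first drive dies. A sketch from the GRUB legacy shell, assuming the partition holding /boot (or its copy) is the first partition of each disk:
Code:
grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit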
I'm attempting to install F13 on a server that has a 2-disk RAID setup in the BIOS. When I get to the screen where I select what drive to install on, there are no drives listed. The hard drives were completely formatted before starting the F13 installation. Do I need to put something on them before Fedora will install?
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
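Before running any --create --assume-clean it is worth capturing what the existing superblocks still say, since the event counters and the original device order decide whether the data is recoverable; a forced assemble is also far less destructive than a re-create and often enough after a botched spare swap. A sketch with the device names from above:
Code:
# save what each member thinks about the array before touching anything
mdadm --examine /dev/sd[abcd]2 > /root/raid-examine.txt
# try a forced assemble first; --create should be the very last resort
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2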
I am looking to convert a RAID 1 server I have to RAID 10. It is using software RAID; currently I have 3 drives in RAID 1. Is it possible to boot into CentOS rescue, stop the RAID 1 array, then create the RAID 10 with 4 drives, 3 of which still have the RAID 1 metadata? Will mdadm be able to figure it out and resync properly, keeping my data? Or is there a better way to do it?
I have a box that doesn't currently have a RAID controller or software RAID running. I would like to make it a RAID 1. Since IDE RAID controllers are hardly around any more, I have another HD that is the exact same model as the drive currently in the box running CentOS. Can I somehow add the second drive and get the box to mirror from here on out? The box gets really hot and I want to be ready for an HD failure.
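Yes, this can be done without reinstalling, though carefully: the standard trick is to build a degraded (one-disk) RAID 1 on the new drive, copy the running system onto it, switch fstab/GRUB to the array, and only then add the original disk as the second member. A heavily abbreviated sketch, assuming the current disk is /dev/hda and the new one /dev/hdb (example names; take a backup first):
Code:
# partition hdb to match hda, then create a mirror with one member missing
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -aHx / /mnt/
# edit /mnt/etc/fstab and the bootloader to use /dev/md0, reboot onto the array,
# then wipe the old disk's partition and add it as the second mirror half:
mdadm --add /dev/md0 /dev/hda1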
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1TB WD HDDs, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, opening a terminal and issuing a "sudo dmraid -ay" command, and then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I had set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, GRUB 2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but it shows the two component disks of the RAID array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top ... following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after GRUB 2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
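Since the blog URL got snipped, here is roughly what that local-top workaround looks like (my reconstruction, not necessarily identical to the blog): a tiny script that runs dmraid -ay before the root device is looked up, rebuilt into the initramfs. It assumes the dmraid package is installed so the binary gets copied into the initramfs.
Code:
# create /etc/initramfs-tools/scripts/local-top/dmraid and make it executable
sudo tee /etc/initramfs-tools/scripts/local-top/dmraid <<'EOF'
#!/bin/sh
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
  prereqs) prereqs; exit 0 ;;
esac
# activate BIOS RAID sets before the root filesystem is mounted
/sbin/dmraid -ay
EOF
sudo chmod +x /etc/initramfs-tools/scripts/local-top/dmraid
sudo update-initramfs -u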
I'm trying to set up RAID 1 on a CentOS 5 server for a Zimbra email server. I get a partition scheme error. Can I do this? The server is an HP ProLiant ML150 G3 with two 80GB HDDs.
Here is my system: I have a Dell PowerEdge 1950 with a PERC 6 and a 300GB RAID setup. It has two 300GB disks in a RAID 1 mirror. I have a few applications and data that have reached around 280GB. As you know, the PowerEdge 1950 can only hold two disks.
They are not mission critical, hence I want to remove the RAID setup and use the disks as non-RAID. By doing so, the applications and data can grow up to 600GB. I do not want to lose the data and setup. I am not so clear about RAID and how to convert it.
I am running a single-drive Ubuntu Server 9.10 box with a lot of software. Now I want to add one more disk (same size and type) and convert this to RAID 0 without needing to reinstall. Is it possible, and if yes, how? I couldn't find anything on RAID 0. It sounds simple, but probably isn't.
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my Virtual Box guests run faster. I turned on SpeedStep and Virtualization, rebooted, and I was slapped in the face with a grub error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use Luks encrypted LVMs on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the Live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root LVM as 'dev/vg-root' on /mnt and the boot partition as 'dev/md0' on /mnt/boot, when I try to run the command $ sudo grub-install --root-directory=/mnt/ /dev/md0, I get these errors: grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea. grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.
Somewhere in my troubleshooting, I also tried mounting the root lvm as 'dev/mapper/vg-root'. This results in the grub-install error: $sudo grub-install --root-directory=/mnt/ /dev/md0 Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have my system operational by Monday morning. That means if I don't have a solution by pretty early tomorrow morning... I'm screwed. A full rebuild will be my only option.
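Two things stand out: GRUB 2 wants to be installed to the MBR of the physical disk(s), not to /dev/md0, and it behaves much better when run from a chroot of the real system than from the live environment. The sequence that has worked for me on a similar RAID + LUKS + LVM stack, heavily abbreviated and with the encrypted device name as a placeholder:
Code:
sudo mdadm --assemble --scan
sudo cryptsetup luksOpen /dev/md1 cryptroot     # substitute the real LUKS container
sudo vgchange -ay
sudo mount /dev/mapper/vg-root /mnt
sudo mount /dev/md0 /mnt/boot
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install /dev/sda          # the disk's MBR, not the md device
sudo chroot /mnt grub-install /dev/sdb          # so either RAID member can boot
sudo chroot /mnt update-grub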
I have a problem configuring a RAID server under Ubuntu 9.10 (kernel 2.6.31.17) with mdadm (v2.6.7.1). First I had some hardware issues that finally got solved by using another motherboard; now I am dealing with the software part. In order to ease things, I am trying to configure a RAID 5 with three partitions on one disk. I have two HDs: one IDE where the OS lies (recognized as sda), and another where I intend to build the RAID (recognized as sdb). On this second drive I have made three partitions (sdb1, sdb2 & sdb3) of the same size. I've already re-installed Ubuntu 9.10 a couple of times, zeroed the superblocks of the partitions, and repartitioned the disks with different partition sizes (I am using 5GB partitions to save time). I've gone through this process several times, and I really don't know how to move forward now. If RAID is about trust and reliability, this is exactly what I'm not able to get.
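For reference, the bare-bones sequence for a three-partition test array like that is short, and when it fails the kernel log during the create/sync is usually where the real clue is:
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3
# watch the initial sync and the kernel log
cat /proc/mdstat
dmesg | tail -n 30
# once the array is clean, persist it so it assembles at boot
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'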
I installed one of the servers with hardware RAID 10 plus a hot spare. Now the problem is that I can't monitor the RAID, since it shows up as a single disk.
So if a disk crashes or fails, I can't monitor it or get logs. Can someone advise where to get and how to install HP Smart Start for Linux, or any alternative way available in Linux to perform such tasks?
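One lighter-weight alternative to the full Smart Start suite is HP's command-line array tool hpacucli (shipped in the ProLiant Support Pack for Linux); assuming it installs on your distribution, the usual health checks look like this:
Code:
# controllers, arrays, logical and physical drive layout
hpacucli ctrl all show config
hpacucli ctrl all show status
# per-disk detail behind the single logical drive (slot number is an example)
hpacucli ctrl slot=0 physicaldrive all show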
I have a new home server I built this weekend with 4 x 320GB Seagate Barracuda SATA drives. I am going to load my OS this week; however, I don't have a RAID controller, so I would like to use software RAID via the mdadm package. My question is: since this is a general home server with no specific function other than holding my data reliably and reasonably fast, how do you recommend I configure my partitions for RAID? What level would be best with my 4-drive configuration: RAID 5 or RAID 10? Should I use a 3-drive RAID and keep the 4th as a spare? Please let me know what you recommend, as I don't have a lot of expertise in what is or isn't practical when it comes to mdadm RAID.
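With four equal disks the usual trade-off is: RAID 10 gives better random-write performance and simpler rebuilds, RAID 5 across all four gives more usable space (roughly 960GB vs 640GB here), and RAID 5 on three disks with the fourth as a hot spare ends up with about the same usable space as RAID 10. A sketch of the two mdadm invocations, assuming one data partition per disk on sdb-sde (example names):
Code:
# RAID 10 across all four disks
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# ...or RAID 5 on three disks with a hot spare:
# sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
sudo mkfs.ext4 /dev/md0
cat /proc/mdstat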
I have created a software RAID 5 array which is currently sitting idle; all space is unallocated. My plan was to use all 4.5TB as a single partition for multimedia files. My problem is that I am trying to set up a file server accessible to Windows systems as well as Linux. Is there a file system I can use to partition this space that will give me what I want?
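If the Windows machines will reach the space over the network rather than by mounting the disks directly, the on-disk filesystem can be whatever Linux likes (ext4 or XFS both handle 4.5TB fine) and Samba takes care of the Windows side. A minimal sketch, with the share name and mount point as examples:
Code:
sudo apt-get install samba
sudo mkfs.ext4 /dev/md0                       # or mkfs.xfs
sudo mkdir -p /srv/multimedia && sudo mount /dev/md0 /srv/multimedia
sudo tee -a /etc/samba/smb.conf <<'EOF'
[multimedia]
   path = /srv/multimedia
   read only = no
   guest ok = yes
EOF
sudo service smbd restart                     # service name varies by release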
I have acquired a Dell PowerEdge 2850 server and installed Ubuntu Server onto it, only to find out we can't use Linux for its intended purpose and need to uninstall/remove Ubuntu. It has a RAID 5 array on the server.
I've been running my dedicated server on Ubuntu 10.04 for 300 days non-stop so far (touch wood). I'm planning to purchase a dedicated server to host many large sites. The specs are a Sandy Bridge Xeon with 16 bays x 2TB of storage using RAID 5 (Adaptec RAID 51645). I don't have experience with RAID, but I read that ext4 can only support file systems up to 16TB, and my plan for the system is to have 32TB of storage. How can I make Ubuntu run this configuration?
Also, can Ubuntu 10.04 recognize the Adaptec card? Going through Adaptec website there is no mention of drivers for Ubuntu (although many other distros are available).
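On the driver side, Adaptec's Series 5 cards (the 51645 included, as far as I know) are handled by the in-kernel aacraid module rather than a vendor download, so 10.04 should see the logical volume as a normal /dev/sd* device. For the size limit, the simplest route at 32TB is a GPT label plus XFS instead of ext4. A sketch, with /dev/sdb standing in for the array device:
Code:
# confirm the controller is driven by aacraid
lsmod | grep aacraid
lspci | grep -i adaptec
# GPT label + XFS sidesteps the 16TB ext4 limit of that era
sudo apt-get install xfsprogs
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary xfs 0% 100%
sudo mkfs.xfs /dev/sdb1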
EFI GUID Partition support works on both 32bit and 64bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, after rebooting the server the file system will no longer be mountable or the GPT table will get corrupted. By default Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Does Ubuntu have no native GPT support in the server install? I've never compiled a kernel. Is that in itself going to be a mind-bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8TB RAID drive, or the individual disks if it screws up. Been on that ride way too long already. Need things to go smoothly. This is not a play-toy server, but will be used in a business.
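That quoted advice is stale for Ubuntu: the stock Ubuntu server kernels already ship with CONFIG_EFI_PARTITION enabled, so no kernel compile is needed for GPT. It is easy to verify on the running system:
Code:
# should print CONFIG_EFI_PARTITION=y on a stock Ubuntu kernel
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
# creating a GPT label on a big array is then just parted/gdisk, e.g.:
sudo parted /dev/sdb mklabel gpt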
I have a remote server, so the only access I have is through SSH. The datacenter where the server is located (LeaseWeb) says they cannot set up software RAID configurations, but also says it is possible for me to do it myself.
Four disks are attached to the server, with only one partitioned. Now, how can I set up a software RAID configuration with these disks? Am I correct in thinking this can only be done with the use of a kickstart script?
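No kickstart needed: since the OS is already running off the first disk, the other three can be turned into an array live over SSH with mdadm. A sketch, assuming the unused disks are /dev/sdb, /dev/sdc and /dev/sdd (check with cat /proc/partitions first, since this is destructive):
Code:
sudo apt-get install mdadm            # or yum install mdadm, depending on the distro
# RAID 5 across the three unused disks (wipes them)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data && sudo mount /dev/md0 /data
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'   # path is /etc/mdadm.conf on Red Hat-style distros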
I've just finished booting my system via Live CD and installing 10.04-1 to existing partitions on a hardware RAID. The install went fine, but when I rebooted I didn't get past the BIOS output screens. I used four existing partitions for the install: /home (MyRAID3, which was kept as-is), / (MyRAID2, which was reformatted), /boot (MyRAID1, also reformatted) and swap.
I have a RAID 5 of 10 x 750GB disks, and it has worked fine with GRUB for a long time with Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized it. BUT I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer! So the resize process was cancelled in the middle and I couldn't access any of the HDDs, so I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it up, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Rescue CD, updated GRUB, got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc and grub-common, removed /boot/grub and installed GRUB again. Same problem.
I have tried to erase the MBR (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, new RAID disk). Same problem. Removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again tried to reinstall GRUB on both sda and sdb, no luck. update-grub is still generating the error about RAID version 0.91, and I am back to a blinking cursor on a normal boot. When you're resizing a RAID, mdadm changes the metadata version from 0.90 to 0.91 to prevent exactly what happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem with a patch at [URL], but I can't compile it; I get various errors about dpkg. So my problem is that I can't get GRUB to work. It just gives me a blinking cursor and "unsupported RAID version: 0.91".
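One sanity check before fighting GRUB any further: see what metadata version the members actually report now, because grub-probe reads the on-disk superblocks directly and will keep complaining as long as any member still carries 0.91. Roughly (array and member names are placeholders for the real ones):
Code:
# what do the array and its members report right now?
mdadm --detail /dev/md0 | grep -i version
mdadm --examine /dev/sdb1 | grep -i version      # repeat for each member device
# if everything really is back at 0.90, regenerate the config and reinstall to the system disk
sudo update-grub
sudo grub-install /dev/sda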