Server :: Best Way To RAID1 The Boot Drive?

May 17, 2010

I am self-teaching everything I need to develop a home-based web server (Linux/Apache/PHP/MySQL/HTML/CSS/etc.). It's quite an undertaking, but not beyond my abilities. I thought this question could have gone in either the linux - software or linux - hardware forum, and certainly not in the n00b section, but I figured it's best put in the linux - server forum, since that's what it relates to.

I have been looking into the software and hardware RAID solutions for Linux because I want to make sure the boot drive of the web server I set up is mirrored, with transparent disk fail/replace/recovery. I mean, setting up a boot drive for RAID1 sounded perfectly logical to me, and why wouldn't it to anybody else? Since I knew RAID controllers were expensive, I looked into the native software RAID support in Linux. My findings revealed an issue with software-RAIDing a boot drive in not only Linux but Windows as well. Apparently, if the primary drive fails (not the mirror), you have no option but to power down the system to properly replace the failed disk, reboot, play some config crap, resync the drive, do some more config crap, reboot again, and -hopefully- it'll be OK. Well, that procedure is simply out of the question, since the idea behind RAID is to transparently proceed as if nothing happened.

I'd like to know if it's even possible to RAID1 the boot drive for transparent and automatic fail/hot-swap/recovery WITHOUT rebooting the system and with no intervention on my part other than replacing the drive, whether it be a software RAID or hardware RAID solution. Eventually, what I'd like to do for a drive configuration is have 3 RAID volumes on the server, configured like so:

RAID volume 1 = boot drive w/ webserver installed
RAID volume 2 = database files
RAID volume 3 = flatfile storage
Each RAID volume will be a RAID1 pair of 1TB drives (total = 6 x 1TB drives)

I've seen a lot of people having failure issues with the software RAID in these forums. Is this more common than not? I'm certainly not opposed to buying a hardware RAID solution as long as they're reliable and provide transparent/automatic recovery. So what's the best way to RAID1 the boot drive for transparent/automatic failover?
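
Whether the failover itself is transparent depends mostly on the controller/backplane supporting hot-plug, but with Linux md the replacement does not require a reboot. A minimal sketch of the md-side workflow, assuming the failed disk is /dev/sda, the surviving mirror is /dev/sdb, and /dev/md0 is the boot array (names are assumptions, not a tested recipe):

Code:

# mark the dying member as failed and pull it from the array
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1
# physically swap the disk (hot-plug capable SATA/SAS assumed), then
# copy the partition layout from the survivor and re-add the new disk
sfdisk -d /dev/sdb | sfdisk /dev/sda
mdadm --manage /dev/md0 --add /dev/sda1      # resync starts automatically
grub-install /dev/sda                        # so the new disk is bootable too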


Debian Configuration :: RAID1 On Jessie - Unable To Boot From Second Drive

Jul 28, 2015

I recently tried to set up RAID1 on an MBR partition scheme on Debian Jessie (debian-8.1.0-amd64-DVD-1). The issue: I am unable to boot from the second drive after grub-install /dev/sdb, no matter what I try. The RAID1 itself, for /, swap, and home, is fully functional. I decided to try the same thing on GPT, same story. How do I boot from the second drive on the most recent Debian branches?
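
For reference, the usual approach on Debian is to put GRUB on both members and let debconf remember that choice, so later GRUB upgrades keep both MBRs current. A hedged sketch (device names are assumptions):

Code:

# write GRUB to both disks and regenerate the config
grub-install /dev/sda
grub-install /dev/sdb
update-grub
# or record both disks via debconf so package upgrades reinstall to both
dpkg-reconfigure grub-pc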


Server :: Raid1 On Debian Won't Boot After Growing?

Apr 21, 2010

I installed a RAID1 on a Debian Lenny box with only one drive ("--raid-devices=1") because I didn't have the other drive yet. When I got the other drive, I used "mdadm --grow /dev/md0 --raid-devices=2" and "mdadm --manage /dev/md0 --add /dev/sdb1". The original drive is sda1. I watched /proc/mdstat until it was completely synced, but after a reboot the system will not reassemble the RAID. It fails with "mdadm: no devices found for /dev/md0". This is where root is, therefore I get nowhere. From a rescue CD I can disable the other drive and shrink back down to 1 device, and it boots fine.
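
One thing that commonly bites here is that the initramfs still carries the old one-device array description, so it cannot assemble the grown array at boot. A hedged sketch of refreshing it on Lenny (paths are the Debian defaults):

Code:

# record the new two-device array and rebuild the initramfs with it
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # check for stale ARRAY lines afterwards
update-initramfs -u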


CentOS 5 :: Server Won't Boot - RAID1 Errors?

Aug 9, 2010

Not sure what is going on here. The server is RAID1 through hardware RAID. It was running an unusually high load, so I rebooted it. Now it won't boot up. I am getting these errors after the CentOS boot screen:

sda: Current [descriptor]: sense key: Medium Error
Add. Sense: Address mark not found for data field

end_request: I/O error, dev sda, sector 3040555357
device-mapper: raid1: A read failure occurred on a mirror device.
device-mapper: raid1: All sides of mirror have failed.

[code]....


Ubuntu :: Mount Ext3 Raid1 Drive

Aug 10, 2011

I bought an ICYBOX IB-NAS4220-B a while ago and kept having issues with it (going down and not restarting, very slow, etc.). Two weeks ago one more issue arose and I couldn't restart or reconnect to the box, so I decided to take the disks out and recover my data to a LaCie 5big. The IcyBox uses software RAID1 and formats the drives as ext3. Since it is a Linux system, I thought I could easily recover the data from an Ubuntu box, so I installed the latest version, as booting from the CD wouldn't give me satisfactory results. I am now stuck with both 1TB drives plugged into my Ubuntu machine and can't seem to be able to mount them.

[Code]....
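
A minimal sketch of how such members are usually picked up under Ubuntu, assuming the NAS disks show up as /dev/sdb and /dev/sdc and the data partition is the first one on each (some of these appliances also layer LVM on top, in which case vgscan and vgchange -ay would be needed before mounting):

Code:

apt-get install mdadm                                  # provides the md assembly tools
mdadm --examine /dev/sdb1 /dev/sdc1                    # confirm they carry RAID superblocks
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1    # --run starts it even if degraded
mkdir -p /mnt/nas
mount -t ext3 -o ro /dev/md0 /mnt/nas                  # read-only while recovering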


CentOS 5 :: Possible To Add Second Hard Drive To Create Raid1?

Jul 2, 2010

The motherboard currently installed in my PC has a RAID utility (Ctrl+I) at startup that allows creating RAID1. But I already have a system installed with CentOS 5.4. In order to protect my data, I need RAID1. Can I add another hard drive now and have the data mirrored and synced onto both hard drives, as if it had been RAID1 right from the beginning?


Fedora :: Adding Second Drive To Existing System For RAID1?

Jul 31, 2011

I have an existing Fedora 15 system installed from scratch. I've ordered a hard drive identical to my sda and want to add it to my existing system as a RAID1 setup. I've googled around and cannot find recent, clear instructions on how to accomplish this. I don't want to reinstall everything from scratch. It should be possible to create the RAID1 using the existing data disk and then mirror everything up?
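
The usual approach is to build a degraded mirror on the new disk, migrate onto it, and only then absorb the original disk. A very rough sketch only, assuming the new disk is /dev/sdb and everything lives in one root partition - back up first, and the fstab/GRUB changes are glossed over here:

Code:

sfdisk -d /dev/sda | sfdisk /dev/sdb                  # copy the partition layout to the new disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # one-legged mirror
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
rsync -aAXHx / /mnt/                                  # copy the running system (one filesystem only)
# after pointing fstab and the bootloader at /dev/md0 and verifying it boots:
mdadm --manage /dev/md0 --add /dev/sda1               # pull the old disk into the mirror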


CentOS 5 :: Permanently Remove Drive From Md Array (RAID1)

May 14, 2011

I installed a distro based on CentOS 5.5 (the FreePBX distro, FYI). It used an automated kickstart script to create md RAID1 arrays out of all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the arrays. So, I ended up with three md arrays (boot, swap, data) that included the thumb drive. Even better, it used the thumb drive for the GRUB boot, so I couldn't start up without it. I was able to mark the USB drive as 'failed' and remove it from each array, and even change GRUB around to boot without the USB drive, but now each of the arrays is marked as degraded:

[Code]...
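
If the thumb drive is gone for good, one way to clear the degraded state is to shrink each mirror back to two members. A sketch only; md0 shown, repeat for the swap and data arrays, and some mdadm versions want --force when reducing the device count:

Code:

mdadm --grow /dev/md0 --raid-devices=2
cat /proc/mdstat          # should report [2/2] [UU] once each array is shrunk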


Ubuntu Servers :: Proper Raid1 Data Drive Formatting

Feb 4, 2011

I have an Ubuntu server with a RAID1 data drive formatted as native Linux ext3. I have searched for answers to my question, but most likely I'm not asking it correctly. I want to use the data drive to store backups of files from various Ubuntu and Windows machines. Do I need to reformat the drive as NTFS to enable Windows use, or can it remain ext3? For that matter, could I reformat it as ext4 as a solution? Again, I wish to use the data drive as backup storage.
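
The on-disk filesystem only matters to the machine the array is attached to; Windows clients would normally reach it over the network, so ext3 (or ext4) can stay. As a purely illustrative example, a hypothetical Samba share pointing at the array's mount point might look like this (share name, path, and user are assumptions):

Code:

# /etc/samba/smb.conf
[backups]
# point "path" at wherever the RAID1 volume is mounted
   path = /srv/backups
   read only = no
   valid users = backupuser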


CentOS 5 Hardware :: 2TB Drive Installed RAID1 Not Mounting In Fstab

Aug 15, 2010

I have installed a 2TB drive in my dual PIII 866 with 750MB RAM. The drive is properly installed and I have configured it with one partition in RAID1. The array loads fine, but when I add the fstab entry to mount /dev/md2 on /data/repository, the following error occurs:

The filesystem size according to the superblock is 488378000 blocks
The physical size of the device is 488377986 blocks
Either the superblock or the partition table is likely corrupt

I have run fsck manually with no errors reported. I have removed the partition and rebuilt the array. The array assembles properly and I can manually mount /dev/md2, but as soon as I add the entry to fstab I get dropped to a shell after a reboot. Not sure where to go now?
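
The error says the ext3 filesystem believes it is 14 blocks larger than the md device underneath it, which is what typically happens when the filesystem was created on the bare partition and the md superblock later shaved a little space off the end. A hedged sketch of shrinking the filesystem to fit (unmounted, and with a backup, since resizing is not risk-free):

Code:

e2fsck -f /dev/md2
resize2fs /dev/md2 488377986    # target size in filesystem blocks, taken from the error message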


CentOS 5 Hardware :: 4k Sectors (Advanced Drive Format In WD Lingo) And RAID1

Dec 29, 2010

I have 2 WD20EARS hard drives on the way (2TB green WD disks with 4k sectors) and I'll be installing CentOS 5.5 in RAID1 on them (2 partitions: one 16GB / at the beginning and the rest in its own partition). I read the following thread: [URL]

and it seems that I might have problems with the 4k sectors (Advanced Drive Format in WD lingo). I'm confused as to what exactly to do. I was thinking of downloading the Fedora 14 Live CD, partitioning there, and then switching to CentOS 5.5 to install. Will that work? It seems I want the md 0.9 metadata because it doesn't have the space limit for me (2TB) and it's stored at the end of the partition, so it avoids alignment issues. Will I be able to make that happen with Fedora 14?
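
A hedged sketch of doing the partitioning from a newer live environment so everything starts on 1MiB boundaries (safe for 4k-sector drives), and then creating the mirrors with the old 0.90 superblock, which lives at the end of each partition. Device names and sizes are assumptions:

Code:

# align both partitions to 1MiB; repeat for /dev/sdb
parted -s /dev/sda mklabel msdos \
    mkpart primary 1MiB 16GiB \
    mkpart primary 16GiB 100%
# build the RAID1 arrays with 0.90 metadata
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2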


Hardware :: Raid1 Mdadm Repair Degraded Array With Used Good Hard Drive?

Jun 27, 2009

I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID1 array. mdadm --detail /dev/md0 shows:

0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1

I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue:

mdadm --manage /dev/md0 --fail /dev/sda1

But mdadm's response is:

mdadm: hot remove failed for /dev/sda1: no such device or address

I thought I must mark the failed drive as "failed" to prevent the RAID1 from trying to mirror in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the RAID array from automatically trying to mirror in the wrong direction.
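
For what it's worth, md only ever rebuilds from the active members onto a newly added device, so the direction cannot go wrong here; the "removed" slot also cannot be failed again, because it is already out of the array. A hedged sketch, assuming the used disk appears as /dev/sda and already has a suitably sized sda1:

Code:

mdadm --zero-superblock /dev/sda1     # wipe any stale RAID metadata from its previous life
mdadm --manage /dev/md0 --add /dev/sda1
watch cat /proc/mdstat                # rebuild runs from /dev/sdb1 onto the newcomer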


Fedora :: F13 - Boot With RAID1 / LVM / Dracut?

Dec 10, 2010

I am trying to boot my F13 server, which has 3 partitions (sda2, sdb2, sdc2) configured as RAID1 (md0); vg00 is on md0, / is on vg00/lvol00, and /boot is on /dev/sda1. sda3, sdb3, and sdc3 contain other non-OS data. I get the following errors:
dracut scanning devices sda3,sdb3,sdc3 for LVM logical volumes vg00/lvol00 vg00/lvol01
.
dracut: Volume group "vg00" not found
dracut: Skipping volume group vg00
dracut: Autoassembling MD Raid
No root device found
and then everything stops
If I go in with rdshell and type in -

[Code]...

OK, I managed to solve it. When I originally installed F13 I had 2 partitions in md0, and later on I added /dev/sdc2 to make it a 3-partition RAID1 array. My thinking was that, seeing as I had the extra space, it wouldn't hurt to use it as another mirror. One month later, after I booted the server, I found out it did hurt. I removed the 3rd partition and now it's fine again. I wonder, though, whether it would be possible to use that 3rd partition.
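
Presumably the initramfs never learned about the three-member layout. A hedged sketch of adding the third mirror again and regenerating the initramfs so dracut can assemble it at boot (device and config paths are the Fedora defaults, not verified against this exact setup):

Code:

mdadm --manage /dev/md0 --add /dev/sdc2     # comes in as a spare first
mdadm --grow /dev/md0 --raid-devices=3      # promote it to an active third mirror
mdadm --detail --scan > /etc/mdadm.conf     # record the new layout
dracut -f                                   # rebuild the initramfs for the running kernel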


Server :: System Image Of Intel Server RAID1?

Aug 2, 2011

I have an Intel server which has its two SATA HDDs in an "Intel Embedded Server RAID Technology 5.4" RAID1 volume. How should I proceed with a system image in case both of those SATA HDDs fail at the same time? Should one take the first HDD of the RAID1 volume, connect it to another machine, and execute:

Code:

# ddrescue /dev/sda1 /media/External/image_of_first_hdd /media/External/log_of_first_hdd
* The HDD from the problematic RAID1 volume would be recognised as /dev/sda1 in the new machine
* /media/External/ is a mount point for a large external HDD in the new machine
* log_of_first_hdd would be the log file

...and then take the second HDD to another machine and execute:

Code:

# ddrescue /dev/sda1 /media/External/image_of_second_hdd /media/External/log_of_second_hdd

How do I make a system image using ddrescue when the disks are in an "Intel Embedded Server RAID Technology 5.4" RAID1?
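
Since this firmware RAID1 keeps each member looking like a plain disk (metadata aside), an image taken as above can usually be inspected on its own. A hedged sketch of loop-mounting one of the resulting images read-only to verify it:

Code:

losetup -f --show -r /media/External/image_of_first_hdd   # prints the loop device, e.g. /dev/loop0
mount -o ro /dev/loop0 /mnt                                # assumes the image holds a single filesystem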


OpenSUSE Install :: 11.1 Machine Won't Boot To RAID1 Md0?

Mar 2, 2010

I cleverly built a machine with two drives mirrored, boot partition on the mirror. It crashed at some point when I was not present, and was apparently writing to disk at the time; ever since, it will not boot. I get error messages to the effect that there is no boot partition. At this point I am not particularly concerned with trying to rebuild this array, though I have read a hell of a lot on the boards about trying that (and am daunted - everyone assumes you can boot the machine with the bad array). Instead, I am wondering: is it possible to build a simple machine on a third disk, then "look" at the disks in the former (?) array, and/or potentially rebuild it?
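
That is generally workable: a rescue or freshly installed system can inspect and assemble the old mirror without booting from it. A hedged sketch, with the old members assumed to appear as /dev/sdb1 and /dev/sdc1:

Code:

mdadm --examine /dev/sdb1 /dev/sdc1                  # shows superblocks, array UUID, event counts
mdadm --assemble --run /dev/md1 /dev/sdb1 /dev/sdc1  # --run starts it even with one member missing
mount -o ro /dev/md1 /mnt                            # read-only while deciding what to do next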


Ubuntu Servers :: Boot From Raid1 (mdadm) + Lvm

Aug 11, 2010

Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.

[Code]....

Then I tested my RAID by hot-pulling the sda cable (ouch). It worked fine: the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, lacking the first drive). Now I am trying to recover this pre-crash state. Adding the first disk (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk being sda...

At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen which would let me wait for a boot for weeks... So, my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again has been:

zero sda's first and last gigabyte to kill any IDs
duplicate sdb's first cylinder to sda to make it bootable
reinitialize sdb's partition table using command o in fdisk for a new disk-id
recreate the sda1 partition
add sda1 to md0


Ubuntu :: Raid1 Array Won't Start On Boot

Jan 2, 2011

I created a RAID1 disk with Disk Utility on Karmic, then upgraded my system to Lucid and then Maverick. This RAID disk is just a data store; I'm not booting off of it. When I reboot, the RAID1 disk does not start. I have to go into Disk Utility, stop the array, and then start it. Then it comes up and I can mount it. I ran dpkg-reconfigure mdadm and it created a valid entry in mdadm.conf, but the array still does not start on boot. I want to have it auto-mount with fstab, but I need to make sure the array starts first.
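
A common culprit is an initramfs that predates the mdadm.conf entry, plus an fstab line that gives up quietly. A hedged sketch of the usual follow-up on Ubuntu (the UUID placeholder and mount point are assumptions):

Code:

update-initramfs -u                 # bake the new mdadm.conf into the initramfs
blkid /dev/md0                      # note the filesystem UUID for fstab
# /etc/fstab entry (sketch): mount by UUID once the array is known to start at boot
#   UUID=<uuid-from-blkid>  /data  ext4  defaults  0  2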


OpenSUSE Install :: Unable To Boot From A Faulty RAID1?

Jul 14, 2010

openSUSE 11.2 64-bit installed on a software RAID1 array. Everything worked smoothly during installation: the system boots, the mdX partitions are in their places, I did the upgrades, configured things the way I wanted, booted several times... all OK. Then I wanted to test the RAID, and since it's RAID1 I just disconnected one drive and started the computer. All I got is:

GRUB loading stage 1.5
GRUB loading, please wait...
Error 21

Connect the drive back and the system works OK.

Disconnect the other drive instead, and I get:
GRUB loading stage 1.5
GRUB loading, please wait...
(hd1,0)/message: file not found
and a simple boot menu (in character mode) shows up with the 2 options SUSE LINUX and Failsafe -- SUSE LINUX (no version number)

[Code].....
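
The usual cause is that legacy GRUB was only embedded in one MBR and its stage files are referenced by a fixed drive number. A hedged sketch of installing it onto the second disk with the device map shifted, so that disk can boot on its own (device name is an assumption):

Code:

grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF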


General :: Mirror The Boot Partition On Existing Raid1?

Feb 26, 2011

I have a RAID1 setup on a machine. Recently it died, and I thought one of the drives had failed since it was throwing errors. So I tried unplugging that drive to get it to boot off the mirror, but it seems the techs forgot to mirror the boot device, so the 2nd drive can't boot on its own. After a while it turned out the SATA cable was in fact bad; it was replaced and now it's working again.

However, this occurrence exposed a flaw in the setup: the RAID1 isn't working as it's supposed to. I would like to correct this. Can I somehow mirror the boot partition so the 2nd drive will boot independently? I'm not sure how I would go about this. This is a CentOS 5 installation.


Server :: Mdadm Cannot Grow Raid1 Over Lvm?

Mar 31, 2011

I have 2 servers (xen1 and xen2 are their hostnames) with the rather perverse configuration below. Each server has 4 SATA disks, 1TB each.

16 Gb ddr3
debian squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

Storage configuration: the first 256MB + 32GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970GB on all 4 SATA disks, is used as RAID10. LVM2 is installed over that RAID10. The volume group is named xenlvm (the servers are expected to be used as Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var, and /home are located on small logical volumes (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):

[Code]...


Server :: Lvm Ontop Of Raid10 Or Combine Two Raid1 Via Lvm?

Sep 9, 2009

I'm planning to set up an Ubuntu file server. I'll be using the 8.04 LTS server edition. The system is probably going to have 4 hard drives; in the end they shall form a software RAID10 system. I'd like to use LVM at some point in order to be able to make snapshots. Reading through some mdadm and LVM docs/tutorials, I could think of two possible setups:

in both cases:

small raid1 of 2 partitions that will form /boot
small raid1 of 2 different partitions as swap space

1. The rest will form 2 large RAID1 arrays, which will be combined into a single virtual drive via LVM.

2. Make a RAID10 out of the rest with mdadm, then make an LVM volume group consisting of just that one virtual RAID10 device.

Are there pros/cons to either solution? Is LVM as powerful as mdadm at striping? Will the first solution produce less overhead?
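
Both layouts can be sketched roughly as follows (partition names, sizes, and VG/LV names are assumptions). As a design note, mdadm's raid10 handles a failed disk itself and its striping is generally at least as capable as LVM's, while option 1 leaves the striping/allocation decisions to LVM:

Code:

# Option 1: two RAID1 pairs joined into one volume group, striped at the LVM level
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
vgcreate data /dev/md2 /dev/md3
lvcreate -i 2 -L 500G -n share data      # -i 2 stripes the LV across both mirrors

# Option 2: one mdadm RAID10 with LVM on top purely for flexibility and snapshots
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]3
vgcreate data /dev/md2
lvcreate -L 500G -n share data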


Server :: Resync Starting Itself Once A Week In RAID1

Jul 25, 2011

How can I stop the resyncing permanently, and how can I check whether normal SATA HDDs can support RAID before/after buying them? Every Saturday or Sunday a resync starts by itself, even though there is no entry about resyncing in the crontab. But if I run "cat /proc/mdstat" it shows the RAID1 is perfect. See the output below:

#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
513984 blocks [2/2] [UU]

[code]....
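
This is most likely the distribution's scheduled redundancy check rather than a real resync; it reads both mirrors to catch latent bad sectors and is harmless (and generally worth keeping). A hedged sketch of where it usually lives and how to stop it, with paths that vary by distribution:

Code:

# Debian/Ubuntu: the check is driven by /etc/cron.d/mdadm and /etc/default/mdadm
sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm
# Red Hat-style systems ship a similar cron job (usually named raid-check)
# a check that is already running can be interrupted with:
echo idle > /sys/block/md0/md/sync_action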


Ubuntu Installation :: Fresh 10.04 Raid1 Install Will Not Boot W/o Manual Grub Input?

May 3, 2011

Followed steps 2-5 and purged/reinstalled GRUB; now it boots as it should, no idea where it was messed up. [URL].. I had 9.10 running in RAID1 and upgraded my hardware (CPU, MB, memory, etc.) and wanted to do a fresh install of 10.04 to get updated. After following the various guides online, such as [URL]..., it begins to load GRUB and drops to a "grub> " shell, from which I have to do the following to get it to boot:

Code:
set root=(md1)
linux /vmlinuz root=/dev/md1 ro
initrd /initrd.img
boot

Then it boots up normally and I can use it like any other desktop. I've been over my grub.cfg and /etc/default/grub files and cannot find the issue. At this point I'm wondering if the fact that it's a RAID1 setup is keeping GRUB from finding its files.

[Code]...


Debian Configuration :: What Can Prevent Setting Up RAID1 On Server

Jan 7, 2011

We have the following server in colocation: [URL]

The provider's technicians worked on it for 3 hours but in the end were unable to set up hardware RAID1 on it.

What could prevent them from doing it? Is it difficult to set up RAID1? It is mentioned as a basic function in the specifications.

They said Debian was not booting after the RAID was configured...


Server :: Backup Server With RAID1 - Fedora 14

Mar 6, 2011

I am a college student (compsci) who moves around a lot with a laptop. I back it up often, but I don't want a simple USB HD that can be stolen from my dorm and/or damaged (it's already been damaged). I am making a file server with RAID 1 that will sit at my parents' house for safer backups. I just need a few pointers; I have never experimented with RAID before.

Software: Fedora 14 with software RAID 1. I will only have SSH running, on a port other than 22, behind a router, with keyed entry only, so I can remotely back up my stuff.

Hardware: A new(ish) P4 mobo with two 2TB HDs (for RAID 1) and one small HD for the OS.

My questions:
1) Should I have the OS installed on a separate drive or on the two RAID drives? I am using software RAID, not hardware, so I assume I need two external drives for the RAID.

2) Should I be using more then two hd's for a RAID 1 array?

3) How can I encrypt the RAID drives? As I said before, I have no experience with RAID.

4) If the OS drive fails, can I just grab a new hd and install Fedora on it to get the data off my RAID array? Or do I need to image the Fedora drive every so often?

5) If one of the RAID drives fails, is there some sort of daemon that can tell me (see the sketch after this list)? I will not be at my house physically, so I will not be able to hear scratching platters :P. Also, because the size of a single disk in the array is 2 TB, can I just go out and get any kind of 2 TB drive to replace the failed one?

6) If the MoBo fails, can I just pop in a new one (of any kind) and continue using my same array?
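
On question 5: mdadm itself ships a monitor mode that can e-mail you when a member fails or degrades. A hedged sketch for Fedora (the address is a placeholder, and the service name is the Fedora default):

Code:

# in /etc/mdadm.conf, tell the monitor where to send alerts:
#   MAILADDR you@example.com
# then enable and start the monitor service
chkconfig mdmonitor on
service mdmonitor start
# optional: send a test alert to verify mail delivery
mdadm --monitor --scan --test --oneshot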


Server :: Benefits To Creating Multiple Partitions For RAID1 Setups?

Dec 21, 2010

I am rebuilding a bunch of servers and want to do it right. They are Dell R200s and R300s with on-board LSI SAS1068E SCSI controllers with 2 SATA drives. The only RAID level supported on these cards is RAID 1. So, to the server, we have 148GB of space to deal with. They currently run 32-bit Ubuntu 8.10; I will be installing x64 Ubuntu 10.04.

I have always seen that it is best practice to partition in such a way that /boot, /var/log, /tmp, and /home, for example, are separated out from /. Usually this is on a RAID5 or higher box. Is there any benefit to doing that sort of thing on a RAID1 box? I realize that this is in some ways a matter of opinion, but I would like the opinion of folks with experience. I'm pretty new to Linux in general.

The main services running on these boxes are Apache2, Tomcat6, MySQL, and Java.


Fedora Hardware :: Software RAID1 /boot Volume Doesn't Mount Automatically At System Startup

Feb 7, 2010

My software RAID setup is as follows.

/dev/md0 (made from sda1 and sdb1) RAID1 /boot partition
/dev/md1 (made from sda2, sdb2, and sdc2) RAID5 / partition

Earlier on I had some trouble with my sda drive: it dropped itself from both arrays, screwing up the mirroring of the two RAID partitions participating in the /boot partition. I eventually got everything sorted out and back in sync. (I also have GRUB installed to the MBR on both sda and sdb.) Things are working fine in that regard, but since then I've had this issue:

During boot up, I'll get an error message that it could not mount my /boot partition (when fstab is set to either /dev/md0 or the UUID). It claims c9ab814c-47ea-492d-a3be-1eaa88d53477 does not exist!

My fstab:

Code:

[mark@mark-box ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jan 20 16:34:41 2010

[code]....

As far as I know, it isn't necessary for /boot to be mounted at all times, correct? Although, as I understand it, I need to have it mounted whenever making kernel changes, correct?
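
One way to narrow it down (a sketch; device name per the setup above, filesystem type as appropriate) is to compare what the array actually reports against what fstab expects, and temporarily mount by device name instead of UUID:

Code:

blkid /dev/md0        # does the filesystem UUID match the one fstab (and the error) refer to?
# temporary fstab line by device instead of UUID, for testing only:
#   /dev/md0   /boot   ext4   defaults   1 2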


Server :: Mdadm Software Raid1 Failed Disk Detection Too Long

Jul 22, 2011

I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as RAID1 with a hot spare. If I pull one of the active disks, all file I/O stops for about 2.5 minutes, after which it starts again and the RAID array is rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:

12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)

[code]....


Ubuntu Installation :: Unacceptable Write Performance Of Software Raid1 On Lucid Server?

May 20, 2010

Compared to my laptop with a 5400rpm HD, the write performance of RAID1 on an Ubuntu Lucid server is unacceptable. In the beginning, I installed Ubuntu 9.04 Server (alternate) using RAID1 with two WD 1TB HDs of 7200rpm (Green Power) and then performed dist upgrades to 9.10 and then to 10.04.

I guess the write performance was initially reasonable, since the installation and data migration (copying from another computer over the LAN) didn't take too much time. However, after upgrading the server to 9.10 or so, I found that large file uploads through Samba or FTP tend to block and time out. Changing the daemon or the client program made no difference, so I tried to test the read/write performance on the server to figure out the situation.

To my surprise, using strace I found that even a simple program like cp would eventually get blocked in a write() system call for tens of seconds. Hence, I performed another disk write test using dd, for data sizes ranging from 50MB to 1GB. The performance test commands are listed as follows:

Quote: dd if=/dev/zero of=test.img count=[5|10|15|20|100] bs=10M

If the data to write is 150MB or less, the command returns immediately at very high speed, but the RAID disks start to sync and stay busy, so the terminal prompt seems to freeze. I think this behavior is normal under the RAID1 configuration, isn't it?

But when the data size is 200MB, the test command blocks for seconds and the write speed is measured at about 16.6MB/s. Of course, the RAID disks still start to sync and stay busy afterwards. Next, I tested writing 1GB of data. The command blocks for about 770 seconds (<2MB/s), while the same test takes only 17.49 seconds (60MB/s) on my laptop.

I also burned a Lucid LiveCD to boot the server and mounted the RAID device to run the test again, but the results remain similar. Does that mean that even if I re-install the system on the RAID, the problem will never disappear?

PS: the disks run in UDMA6 mode, unchanged.
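
One caveat on the measurement itself: without a flush, dd mostly reports how fast the page cache absorbs the data, which would explain the instant results below 150MB. A hedged variant of the same test that times what actually reaches the array:

Code:

dd if=/dev/zero of=test.img bs=10M count=100 conv=fdatasync   # flush before reporting the rate
dd if=/dev/zero of=test.img bs=10M count=100 oflag=direct     # or bypass the page cache entirely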


CentOS 5 :: Freshly Installed Server With Large Partition In RAID1 Config - High Server Load

Mar 15, 2009

Yesterday I installed a new server with a large partition for my Xen images. This partition is about 930GB. The installation took ages, and after it finished I found out why: the software RAID1 I configured is rebuilding the large partition.
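
The initial sync has to copy the whole 930GB once, and by default md lets it run as fast as the disks allow, which is what drives the load up. A hedged way to cap it while the machine is in use (the value is only an example, in KiB/s):

Code:

echo 10000 > /proc/sys/dev/raid/speed_limit_max   # throttle the rebuild
cat /proc/mdstat                                  # watch progress; raise the limit again afterwards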
