General :: Creating A RAID1 Partition With Mdadm On Ubuntu?
Jan 28, 2010
I'm trying to set up a RAID1 partition on my Ubuntu 9.10 workstation. On this dual-boot system, Ubuntu runs from a separate drive (/dev/sdc - an SSD that is quite small, which is why I need more disk space). Besides that, there are two traditional 500 GB hard drives, which have Windows 7 installed (I want to keep the Windows installation intact) and about half of their space unallocated. This space is where I want to set up a single, large RAID1 partition for Linux.
(This, to my understanding, would be software RAID, whereas the Windows partitions are on hardware RAID - I hope this isn't a problem... Edit: See Peter's comment. I guess this shouldn't be a problem, since I see both drives separately on Linux.) On both disks, /dev/sda and /dev/sdb, I created identical new partitions of type "Linux raid autodetect" with fdisk to fill up the unallocated space.
Device Boot Start End Blocks Id System
/dev/sda1 1 10 80293+ de Dell Utility
/dev/sda2 * 11 106 768000 7 HPFS/NTFS
[code]....
But all I get is "Device or resource busy" when trying to create the RAID array. Quite strange.
Update: Could the device mapper have something to do with this? How do /dev/mapper and dmraid relate to all this mdadm stuff anyway? Both provide software RAID, but... differently? Sorry for my ignorance here. Under /dev/mapper/ there are some device files that, I think, somehow match the 3 Windows RAID partitions (sd{a,b}1 through sd{a,b}3). I don't know why there are four of these arrays, though.
$ ls /dev/mapper/
control isw_dgjjcdcegc_ARRAY1 isw_dgjjcdcegc_ARRAY3
isw_dgjjcdcegc_ARRAY isw_dgjjcdcegc_ARRAY2
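For what it's worth, those isw_* entries are device-mapper targets created by dmraid for the Intel (isw) BIOS "fakeraid" that holds the Windows partitions, while mdadm manages its own, separate md arrays. A hedged sketch for seeing which disks and partitions dmraid has claimed (the commands are standard; the output is obviously system-specific):
Code:
sudo dmraid -r          # lists the raw block devices that belong to a BIOS RAID set
sudo dmraid -s          # summarizes the discovered RAID sets
sudo dmsetup deps       # shows which underlying devices each /dev/mapper node is built on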
After some hours of googling, I've managed to increase the size of the default ramdisks (/dev/ram0-16) to 1 GiB each. I raided them together with mdadm to try it out, then created a filesystem, mounted it, etc. No problems. The problem comes when I used GParted to move my Windows partition over; in the unallocated space (1 GiB) I created an unformatted partition (/dev/sda2). Now when I try to create the raid array I get the following:
Code:
:~$ sudo mdadm --create /dev/md0 -l 1 -n 2 /dev/ram0 /dev/sda2
mdadm: Cannot open /dev/sda2: Device or resource busy
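When mdadm reports "Device or resource busy" on a partition, something else usually holds it open; a few hedged checks, assuming the partition in question is /dev/sda2:
Code:
cat /proc/mdstat              # is sda2 already part of an existing md array?
mount | grep sda2             # is it mounted somewhere?
sudo dmsetup deps             # is a device-mapper (dmraid) target built on top of sda?
sudo fuser -v /dev/sda2       # is any process holding the device open?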
I am running kernel 2.6.18-128.el5 on a 64-bit quad core machine with 8GB RAM. Using mdadm I set up a RAID1 array between two Western Digital 1.5TB drives. The problem is that the resync is running VERY slowly. Here is the current status.
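A common reason for a slow resync is the kernel's conservative default rebuild speed limits; a hedged sketch for checking and raising them (the figures below are example values, not recommendations):
Code:
cat /proc/mdstat                              # shows the current rebuild speed and ETA
sudo sysctl dev.raid.speed_limit_min          # minimum guaranteed rebuild bandwidth (KB/s)
sudo sysctl dev.raid.speed_limit_max          # ceiling for rebuild bandwidth (KB/s)
sudo sysctl -w dev.raid.speed_limit_min=50000     # raise the floor so the resync may use more bandwidth
sudo sysctl -w dev.raid.speed_limit_max=200000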
I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically, one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died, and I could do with either rebuilding the array or getting the data off another way.
I have a spare machine I could use for the recovery process. It has a spare drive, but it's only 120GB; I also have a bigger 320GB disk, but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive, or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB, or will it have to be 500GB? Next question is which version of Linux is best to use; I have Knoppix 6.2 and Ubuntu (not sure on version) already. I noticed that mdadm isn't installed by default on Ubuntu.
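Not an authoritative answer, but an mdadm replacement member only has to be at least as large as the original, so a 1TB or 1.5TB drive should work for rebuilding, and for simply copying the data off, the target disk only needs to hold the files themselves. A rough sketch of reading a single surviving RAID1 member on a recovery machine (the member device /dev/sdb2 and mount point are assumptions; check fdisk -l and dmesg for the real names):
Code:
sudo apt-get install mdadm                        # not installed by default on Ubuntu
sudo mdadm --assemble --run /dev/md0 /dev/sdb2    # --run starts the array even though it is degraded
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/md0 /mnt/recovery           # mount read-only and copy the data off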
Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.
[Code]....
Then I tested my RAID by hot-pulling the sda cable (ouch). Worked fine: the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, lacking the first drive). Now I am trying to recover this pre-crash state. Adding the first disk (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk being sda...
At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen that would let me wait for a boot for weeks... So, my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again was: zero sda's first and last gigabyte to kill any IDs, duplicate sdb's first cylinder to sda to make it bootable, reinitialize sdb's partition table using command 'o' in fdisk for a new disk ID, recreate the sda1 partition, and add sda1 to md0.
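One hedged suggestion for this kind of "only one disk boots" situation on Ubuntu 10.04: after the array is back in sync, put GRUB 2 on the MBR of both members so either disk can boot on its own (the device names are assumptions for whatever the two members are called at that point):
Code:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub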
I've got 2 servers (xen1 and xen2 are their hostnames) with the somewhat perverse configuration below. Each server has 4 SATA disks, 1 TB each.
16 GB DDR3. Debian Squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB and 32 GB of two of the four disks are used as RAID1 devices for /boot and swap, respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. LVM2 is installed over that RAID10. The volume group is named xenlvm (the servers are intended as Xen 4.0.1 hosts, but the story is not about Xen troubles). /, /var, and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):
I have a RAID1 array where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection with the mdadm -E /dev/sdb1 command says that the sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive has failed? Am I misreading the command outputs?
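When the two views disagree like this, it can help to line up the array's opinion with what each member's own superblock records; a hedged sketch (md0 and the partition names are assumptions based on the post):
Code:
sudo mdadm --detail /dev/md0     # the array's view: which slot is faulty/removed
sudo mdadm -E /dev/sda1          # what sda1's superblock thinks about the array
sudo mdadm -E /dev/sdb1          # what sdb1's superblock thinks
cat /proc/mdstat                 # e.g. [U_] shows which slot is actually missing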
Posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to an mdadm RAID1 array. I partially followed this guide: [URL].. What I did was boot up a live CD and make three partitions on both of my empty disks: one for /, one for swap, and one for /vz (it's an OpenVZ server). I made those partitions into separate RAID1 arrays and then rsync-ed everything from the old disk to the new partitions.
After I had moved everything I did chroot into the new raid array and edited both grub config files and fstab, according to the guide.
[Code]...
I have managed to run the system on the RAID1 disks when using a Super Grub2 Disk CD, but that has its own grub and can boot any distro, so I can see that the system is working fine, except for grub. I have tried installing grub both from a live CD (Ubuntu 64-bit) and when booted into the RAID1 array, but it gives the same results as stated above.
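Not a definitive fix, but the usual way to get GRUB onto an mdadm RAID1 on CentOS is to chroot into the new root with /dev and /proc bound in and then install GRUB legacy onto both disks. A rough sketch, assuming the new root is mounted at /mnt/sysimage and /boot is the first partition of each disk:
Code:
mount --bind /dev  /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
chroot /mnt/sysimage
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit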
One of the hard drives in my server failed the other day, backups saved the day and downtime was only a few hours, but when setting up the new drive I went ahead and migrated to software RAID, in the hopes it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.
sda was the original drive, with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I added the sda partitions to the array, and the sync went off without a hitch on md1 and md2. md4 progresses well, but after a few hours /proc/mdstat just shows this:
I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.
OpenSUSE 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x 500GB IDE disks. I created /dev/md0 as a 250MB array across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB array across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that had drives failing to show up at boot. Somehow, /dev/sdd2 ended up in array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.
The obvious step of failing the partition, removing it, then adding (or re-adding) it just brings it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. I thought this might be the trick - after the zero, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.
So 'mdadm --add /dev/md1 /dev/sdd2' and it still comes back as a spare. Here is mdadm -D /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Sat Jul 9 10:26:01 2011
     Raid Level : raid1
     Array Size : 488119160 (465.51 GiB 499.83 GB)
code....
I can't stop this array, the OS is running from there. I can't easily boot from CD to repair, all IDE ports have disks attached.
Does anyone have an incantation to promote a spare to active?
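A hedged guess, since the full -D output is truncated above: if the array's "Raid Devices" count has dropped to 1, anything you add can only ever be a spare, and raising the count back to 2 is what makes mdadm promote it and start a rebuild. A sketch:
Code:
sudo mdadm -D /dev/md1 | grep -E 'Raid Devices|State'
# if it reports "Raid Devices : 1", grow the array back to two active members:
sudo mdadm --grow /dev/md1 --raid-devices=2
cat /proc/mdstat     # the former spare should now show as rebuilding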
I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as Raid1 with hot spare. If I pull one of the active disks, all file i/o will stop for about 2.5 minutes after which it will start again and the raid array will be rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:
12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)
I've run into a problem where the server freezes on heavy writes.
System
CentOS 5.5 x86_64 with latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2GB DDR3
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0
I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID1 array. mdadm --detail /dev/md0 shows:
0    0    0    -1    removed
1    8    17    1    active sync    /dev/sdb1
I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue:
mdadm --manage /dev/md0 --fail /dev/sda1
But mdadm's response is:
mdadm: hot remove failed for /dev/sda1: no such device or address
I thought I must mark the failed drive as "failed" to prevent RAID1 from trying to mirror in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the raid array from automatically trying to mirror in the wrong direction.
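A hedged note: once a member shows as "removed" there is nothing left to mark as failed, which is why the --fail command reports "no such device". A degraded mirror only rebuilds from its remaining active member (/dev/sdb1 here) onto whatever disk you add, so wiping any stale metadata off the replacement and adding it should be safe; a sketch assuming the replacement partition will be /dev/sda1:
Code:
sudo mdadm --zero-superblock /dev/sda1        # clear any old RAID metadata on the used drive
sudo mdadm --manage /dev/md0 --add /dev/sda1
cat /proc/mdstat                              # watch it rebuild from sdb1 onto sda1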
I've been trying all day to boot Debian on an LVM partition on a RAID1. I have found some howtos, but they only show how to do it for one or the other, not both at the same time. Using those howtos I think I have grub2 set up right; the problem is my kernel. It has support for both LVM and RAID built in. I set up the RAID and LVM partitions while running that kernel, but when I use it to boot up the system on the LVM/RAID it gives a kernel panic.
The OS is by itself on an old disk, sda1. The RAID1 is on two other disks, sdb1 & sdc1. It is divided into 2 logical volumes, vg-root & vg-media. I just copied the OS onto vg-root, then told grub to boot to it. The grub entry is like so.. I tried setting root=(md0) but that didn't work either. I'm pretty sure the problem is with the kernel, but I don't see why, since it can see the raid and lvm partitions once it is booted up, and both the raid & lvm options are built into the kernel, so it should be able to see them at boot time.
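One hedged guess about the panic: even with RAID and LVM drivers built into the kernel, logical volumes are activated by userspace tools, so a root filesystem on LVM generally needs an initramfs that includes the mdadm and lvm2 hooks. A sketch of rebuilding it on Debian:
Code:
sudo apt-get install mdadm lvm2        # make sure the initramfs hooks are present
sudo update-initramfs -u -k all        # rebuild the initrd(s) with md/lvm support
sudo update-grub                       # regenerate the grub entries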
I have a RAID1 setup on a machine. Recently it died, and I thought one of the drives had failed, as it was shooting errors. So I tried unplugging that drive to get it to boot off the mirror, but it seems the techs forgot to mirror the boot device, so the 2nd drive can't boot on its own. After a while it was realized that the SATA cable was in fact bad; it was replaced, so now it's working again.
However, this occurrence showed a flaw in the setup: the RAID1 isn't working as it's supposed to. I would like to correct this. Can I somehow mirror the boot partition so the 2nd drive will boot independently? I'm not sure how I would go about this. This is a CentOS 5 installation.
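Assuming /boot already lives on an md RAID1 spanning both drives (if it doesn't, it would have to be migrated there first), CentOS 5 uses GRUB legacy, and a boot sector can be written to the second drive from the grub shell; a hedged sketch where the second drive is /dev/sdb and /boot is its first partition:
Code:
grub
grub> device (hd0) /dev/sdb    # treat sdb as the first BIOS disk, as it will be if sda is gone
grub> root (hd0,0)
grub> setup (hd0)
grub> quit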
I have a server that was running a hardware isw RAID on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So, I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel RAID setup; it did show the new drive, and I was able to set it to rebuild the raid. So, continuing the reboot, everything came up just fine except the RAID1 on the system disk. I have tried many times to get the system to rebuild the raid using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed. At present when I look at the system it does not show a raid setup on the system disk (this comprises the entire 1TB disk, with two partitions: sda1 as / and sda2 as swap). Problem: I have decided to forego the Intel raid and just use mdadm. I have a test system set up to duplicate (not the software, but the disk partitions) the server setup.
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
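For the migration itself, a commonly used (hedged) outline is to build a degraded RAID1 on the second disk, copy everything over, boot from it, and only then pull the original disk in; the device names and filesystem below are assumptions for the test setup:
Code:
# create the mirror with one member missing, on the new/empty disk:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
# ...copy the data across, point /etc/fstab and grub at /dev/md0, reboot onto it...
# then add the original disk's partition so the mirror rebuilds onto it:
mdadm --add /dev/md0 /dev/sda1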
I am rebuilding a bunch of servers and want to do it right. They are Dell R200s and R300s with on-board LSI SAS1068E SCSI controllers with 2 SATA drives. The only RAID level supported on these cards is RAID 1. So, to the server, we have 148GB of space to deal with. They currently run 32-bit Ubuntu 8.10; I will be installing x64 Ubuntu 10.04.
I have always seen that it is best practice to partition in such a way that /boot, /var/log, /tmp, and /home, for example, are separated out from /. Usually this is on a RAID5 or higher box. Is there any benefit to doing that sort of thing on a RAID1 box? I realize that this is in some ways a matter of opinion, but I would like the opinion of folks with experience. I'm pretty new to Linux in general.
The main services running on these boxes are Apache2, Tomcat6, MySQL, and Java.
We have some servers that run in very harsh environments (a research vessel) that need to have high availability. We have software RAID1 for some measure of resiliency, along with proper data backups (tapes etc.), however we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAIDed devices (from a live CD):
Code: dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using
Code: mdadm --assemble
to re-create and rebuild the array.
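For the restore side, a sketch under the same assumptions (image path from above, the new disk appearing as /dev/sda, run from a live CD):
Code:
gunzip -c /mnt/external/image/test.img | dd of=/dev/sda bs=4096   # write the image back to the first disk
# repeat for the second disk (or re-partition it and let md rebuild onto it), then:
mdadm --assemble --scan        # assemble whatever arrays the on-disk superblocks describe
cat /proc/mdstat               # confirm the array came up and watch any rebuild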
I just installed Ubuntu Server Edition on my computer (brand new, no OS) and finished the installation. In the terminal I used apt-get ubuntu-desktop to install a desktop interface. In my rig I have two 500GB HDDs. I set them up through my computer's BIOS as RAID1 drives, yet as I understand it I still need to configure Ubuntu software raid for it to work correctly. Unfortunately, I already partitioned my drives! I used the easy way (guided with LVM or whatever) and let it do it for me. Now, RAID1 is very important to me! Is there any way to repartition the disks to use RAID1, or do I need to wipe my computer and reinstall Ubuntu?
I'm trying to create an extended partition. In GParted, I shrunk the size of the existing partition and now want to create a new EXTENDED partition in the free, unallocated space. GParted only lets me create a PRIMARY partition. What am I doing wrong here?
Here's what I've got right now:
You can actually ignore the flag for the swap as "boot." That was me just messing around trying to get it to work. I've removed that flag. Not sure how the question of boot affects all of this...maybe it factors in somehow.
I am a newbie to Linux and I am using CentOS. I am trying to create a new partition on my CentOS VM. I create a new primary partition using fdisk (I use the command fdisk /dev/hda). After I create the partition and use partprobe to write the partition to disk, I try to give the new partition a label. So, I use the command e2label /dev/hda LABEL=test.
However, when I enter the command e2label /dev/hda3, it doesn't display the label for the newly created partition. Am I doing something wrong here? Is the syntax of the e2label command wrong when creating the label for the new partition? Did I miss a step after writing the new partition to disk?
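A hedged observation: e2label operates on an ext2/ext3 filesystem, not on the raw disk, and its syntax is simply 'e2label <device> <label>' with no "LABEL=" prefix, so the sequence would look roughly like this (assuming the new partition ended up as /dev/hda3):
Code:
mkfs.ext3 /dev/hda3        # the partition needs an ext2/ext3 filesystem before it can carry a label
e2label /dev/hda3 test     # set the label on the partition device, not on /dev/hda
e2label /dev/hda3          # with no label argument, prints the current label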
I have built a small test server. I am planning on using this machine as an email and web server to test out its hosting capacities. In the future I will build a larger and better equipped version.
AMD Athlon X2 2.0GHz
2 x 160GB SATA drives (hardware RAID1, done through the motherboard)
2 GB RAM (dual channel)
Like I said, a small test server. I am trying to install 10.04 Server Edition. When I get to the point of partitioning it asks me to activate the RAID, so I do. I get through the guided partitioning and get ready to write the filesystem to the drives, and the screen goes red and says that it has failed. On a side note, this works if I install it on the same drives without any RAID configuration.
I'm following the RHCE book (5th edition) by Michael Jang. In the exercise on pg. 140, creating partitions, I've created /boot (hda1), swap (hda2) and / (hda3). So far so good.
Next, I'm supposed to make an extended partition, containing the rest of the disk. So this should be hda4, right? But when I try to create either an LVM, or RAID partition, it creates hda4 AND hda5 under hda4. Why is that? Am I doing something wrong? The book next asks me to create /var as hda5, so if hda5 is already created automatically above, how am I supposed to create /var?
I have two hard drives in my desktop. One HD has a working Ubuntu system-hence the ability to post here- and the other contains Windows XP Pro. When the XP drive crashed I was able to re-install an image I had saved using Acronis. Unfortunately the dual-boot option at startup is no longer available. I can only boot to Ubuntu. Not so bad really but there are some programs on Windows that I need to use. Is there any way, using Grub perhaps, that I can reconfigure an MBR to include the second hard drive and the Windows system?
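On the working Ubuntu side, os-prober is usually what adds a Windows entry back to the GRUB menu, so a hedged first step before editing anything by hand:
Code:
sudo fdisk -l          # confirm the Windows partition on the second drive is visible
sudo os-prober         # see whether the Windows install is detected at all
sudo update-grub       # regenerate the boot menu; a Windows entry should appear if it was detected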
I want to encrypt a full partition instead of creating a file and encrypting it, and I also want to be able to move this disk to another server. Do I need to take some files (that hold keys) with me to the new server? I am using FC11.
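If LUKS (cryptsetup) is used for the whole partition, the key material lives in the LUKS header on the partition itself, so moving the disk to another server only requires the passphrase (or a keyfile, if you explicitly add one). A minimal sketch, assuming the partition is /dev/sdb1 and currently empty:
Code:
cryptsetup luksFormat /dev/sdb1          # encrypts the whole partition; keys are kept in the on-disk LUKS header
cryptsetup luksOpen /dev/sdb1 secure     # unlock it as /dev/mapper/secure
mkfs.ext3 /dev/mapper/secure
mkdir -p /mnt/secure
mount /dev/mapper/secure /mnt/secure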
I was trying to install Debian 5.04 on a Mac G4 and, in typical geek tradition, I didn't RTFM. During installation, I nuked all existing partitions, creating new ones to my liking. But as I learned later during the installation process, yaboot needs a NewWorld partition, so I can't boot the installation. I don't have any OS X CDs with me (this is a used G4 I purchased off craigslist) with which to create an HFS partition. I've re-run the Debian installer, which lets me create a partition that is supposed to be of type 'NewWorld', but the installer does not seem to like it or recognize it.
Yesterday I installed a new server with a large partition for my Xen images. This partition is about 930GB. The installation took ages, and after it finished I found out why: the software RAID1 I configured is rebuilding the large partition.
120GB SATA HDD - Primary OS drive
3 x 1.0TB SATA HDD - RAID5 array
This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB drive, which I had problems with - Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disk under the Live CD and proceeded to install F12.
Once F12 was installed, I loaded up mdadm to re-activate my RAID5 array, using 'sudo mdadm --assemble --uuid=(the uuid)' - and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. Erased /dev/sdb, then attempted 'sudo mdadm --add /dev/md0 /dev/sdb' and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.
[Code].....
I don't believe the hard drives are connected in the exact same order they were in before - I disconnected everything in the system and blew it out (it was pretty dusty).
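A hedged reading of that error: it usually means the array was created with external container metadata (e.g. Intel IMSM "fakeraid" managed through mdadm), so a new disk has to be added to the parent container device rather than to /dev/md0. The container name below is an assumption; --detail --scan shows the real one:
Code:
cat /proc/mdstat                          # the container shows up alongside the member array
sudo mdadm --detail --scan                # lists both the container (e.g. /dev/md/imsm0) and /dev/md0
sudo mdadm --add /dev/md/imsm0 /dev/sdb   # add the blank disk to the container, not to the member array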