I have Ubuntu Server 8.04 installed. It uses software RAID0 (mdadm, two hard disks: sda and sdb). The server crashed after a power failure. When trying to boot I get the following messages:
When I set up Ubuntu 10.10 I had only one HDD around, so I installed my system with the idea that I would add the second HDD for RAID1 later on. Last weekend I wanted to add that HDD, but discovered that Ubuntu had created a RAID0 array. So I went on and tried different things: removing the first HDD from the RAID0 array, creating a RAID1 with two disks, and so on... I finally managed to synchronize both disks, but after a reboot the RAID0 array appeared again with only one disk. Now I know I should have updated the mdadm.conf and fstab files... My last tries resulted in a missing superblock. Here is the story:
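For anyone in the same spot, a minimal sketch of persisting a freshly created array so it survives reboots might look like the following (this assumes Ubuntu's config path /etc/mdadm/mdadm.conf and an array called /dev/md0; adjust names to your setup):

Code:
# capture the array definition and append it to mdadm's config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so the array is assembled at boot
update-initramfs -u
# reference the filesystem by UUID in /etc/fstab, for example:
# UUID=<uuid from blkid /dev/md0>  /data  ext4  defaults  0  2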
I'm looking to shrink my Windows partition on a RAID0 array and create an mdadm Ubuntu partition using RAID0. Is this possible? Can I just ignore the /dev/mapper device and use the standard /dev/sdX devices?
My home backup server, with 8x2TB disks, won't boot anymore. Two disks failed at the same time and I rebuilt the RAID 6 array without any problem, but now I can't boot the OS. I'm using Ubuntu Server 10.10. I've made screenshots of the displays so I don't have to copy everything here. The problem at boot:
And the GRUB config: It's not a production server, but I would like to have it online. I've tried for the last 2 days (just a couple of hours a day) but without success. I was advised to do "mount -o remount,rw /" and then edit /etc/fstab, but I get a "file does not exist" error.
I upgraded my 9.10 installation to 10.04 and decided to try out btrfs on some spare drives in the system:
sudo mkfs.btrfs -m raid0 /dev/sdb /dev/sdc /dev/sdd
The only way the system sees the btrfs array is by running btrfsctl -a and then mounting /dev/sdc, but that has to be done in userland, not at boot. If I try to mount it via fstab, Ubuntu won't load because it can't find the mount point:
/dev/sdc /Images atasum,thread_pool=128,compress,rw,user 0 0
So where am I going wrong? I tried mounting via the UUID also, but that didn't seem to work for me either.
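One thing that often helps with multi-device btrfs at boot (a hedged suggestion, assuming the three-device layout above): the kernel needs to know about every member device before the mount, either via a device scan in the initramfs or by listing the members explicitly as device= mount options. A sketch of an fstab entry along those lines:

Code:
# mount the multi-device btrfs filesystem by UUID and name every member
# device explicitly (the /Images mount point is taken from the post above)
UUID=<btrfs filesystem uuid>  /Images  btrfs  device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,compress  0  0

Running a device scan from the initramfs (btrfsctl -a on the old tools, btrfs device scan on newer btrfs-progs) achieves the same thing without hard-coding device names.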
I've planned to install Ubuntu Lucid Lynx 10.04 x64 on my rig:
dual boot setup, Windows 7 preinstalled
Intel ICH10R RAID0
manual partition setup: 200 MB ext3 /boot, 2 GB swap, 100 GB ext4 /
The GUI installer can see my RAID and allows me to create these partitions manually. But when the install begins I'm getting this error:
Quote: The ext3 file system creation in partition #5 of Serial ATA RAID isw_dgaehbbiig_RAID_0 (stripe) failed
[Code]...
fdisk -l from the live CD shows me the separate HDDs without the RAID0 wrapper (sda and sdb). Also, in the advanced section where I can configure the boot loader, the default is /dev/sda, which is part of the RAID0. I'm choosing isw_dgaehbbiig_RAID_0 as the location for the loader, assuming that this would be the MBR. Am I doing this step right?
I have a RAID0 setup (see below for details) connected to the ICH10R RAID chip on my Gigabyte EX58-UD4P motherboard. On every second or third boot I get the following error:
Code:
udevd-work[157]: inotify_add_watch(6, /dev/dm-1, 10) failed: no such file or directory
udevd-work[179]: inotify_add_watch(6, /dev/dm-1, 10) failed: no such file or directory
The system halts for a second or two and then boots on like nothing has happened.
My setup is as follows:
Two 500 GB discs in RAID0 on the ICH10R. The first partition (100 MB) is a Windows 7 system partition (gaming etc.), followed by a 300 GB partition for the actual Windows install. Then Ubuntu 10.10 with /, /home, swap, and five data partitions. GRUB is on the Windows system partition.
Ubuntu 10.10 x64 installed via USB. No problems during install. Besides that I have 2x300 GB discs on the Gigabyte built-in RAID controller, installed with a Hackintosh setup using Mac software RAID. My fstab looks like this:
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
I would like to make a bootable RAID0 with LVM on a pair of SSDs (sdc and sdd). The system is already running off of an HDD. I built the RAID0 through the ASUS BIOS on the Intel controller. I see:
# cat /proc/mdstat
Personalities : [raid0]
md124 : active raid0 sdc[1] sdd[0]
      59346944 blocks super external:/md126/0 32k chunks
md125 : active raid0 sda[1] sdb[0]
      222715904 blocks super external:/md127/0 32k chunks
[code]....
My challenge is that when I open GParted, it only shows one disk at a time. I would like to learn how to use GParted to partition this RAID0 with an EFI System Partition and LVM for the rest.
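In case it helps, GParted can also be pointed at the assembled device itself (e.g. /dev/md124 from the mdstat output above) rather than at the member disks. Here is a hedged command-line sketch of the same layout with parted and LVM, assuming /dev/md124 is the target RAID0 and that the volume group name is just a placeholder:

Code:
# GPT label on the assembled RAID0 device
parted -s /dev/md124 mklabel gpt
# EFI System Partition (FAT32), roughly the first 512 MiB
parted -s /dev/md124 mkpart ESP fat32 1MiB 513MiB
parted -s /dev/md124 set 1 esp on          # on older parted: set 1 boot on
mkfs.vfat -F 32 /dev/md124p1
# the rest of the device as an LVM physical volume
parted -s /dev/md124 mkpart lvm 513MiB 100%
parted -s /dev/md124 set 2 lvm on
pvcreate /dev/md124p2
vgcreate vg_ssd /dev/md124p2
lvcreate -L 20G -n root vg_ssd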
I'm an experienced PC technician running a Slackware 13.1 distro on a server at home. I currently have 2x1TB drives in the box allocated as RAID1 for the OS and everything I want to run on the machine. I'd like to think my question/needs are fairly straightforward: I am buying 2x2TB drives to add to this machine, as it serves as a NAS box in one of its roles. I'd like these in RAID0, as the data that will go on them is not that important to me. The question is: after adding the new drives to the system and booting up, what do I need to do to get the drives added and working successfully as a single RAID0 volume that I can then Samba share out to my network? To clarify, I understand a little of mdadm, as I used it to RAID1 my existing setup when I installed, and I'm fine with the Samba bit afterwards.
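A hedged sketch of what that might look like, assuming the new drives show up as /dev/sdc and /dev/sdd, that each gets a single fd (Linux raid autodetect) partition, and that md0 is already taken by the existing RAID1:

Code:
# create a two-disk RAID0 from the new drives
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
# put a filesystem on it and mount it
mkfs.ext4 /dev/md1
mkdir -p /srv/nas
mount /dev/md1 /srv/nas
# persist the array and the mount (Slackware keeps the config in /etc/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/md1 /srv/nas ext4 defaults 0 2' >> /etc/fstab

After that, /srv/nas can be exported through Samba the same way as any other directory.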
I tore down my old FTP server running FreeNAS because it had some issues. I decided to add a couple more drives and install CentOS on it instead. So now I have 4x2TB drives that I set up yesterday. The OS is installed on a USB flash drive exclusively, while I chose to set up the 2TB drives as two stripes which are then mirrored, i.e. RAID0+1 instead of RAID1+0.
/dev/md0 = drives 1+2 in RAID0
/dev/md1 = drives 3+4 in RAID0
/dev/md2 = md0+md1 in RAID1
Well, the sync time for these two mirrors of blank drives is almost done, having taken some 15+ hours. I chose this setup to have a single 3.6TB file system mirrored to another 3.6TB for "backup". As this is only going to be a headless FTP server, I don't need massive redundancy/backup; a mirror should suffice if I need to pull some files. So now I'm second-guessing myself. I'm wondering if it is even worth it to stripe 2 drives into a single volume with such large sync times. Since data transfer will be over the internet, I don't require the massive file-writing capacity that would be preferable if it were in my home as a NAS. So the "performance" increase of RAID0 is not strictly mandatory.
I'm wondering if I should just wipe the drives and set them up in JBOD or LVM instead of RAID0 arrays. In case of boot drive failure, I'd be booting a live CD to transfer files elsewhere or recover the boot drive. So what difference would I see if I were live-booting to md0, md1, md2 for data viewing as opposed to live-booting to an LVM or JBOD array? As an aside, the drives are WD20EARS, which use 4K sectors, which I aligned to start at sector 2048 in fdisk.
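For comparison, a hedged sketch of the LVM route (a simple linear volume group spanning two whole drives; the device and volume names are only placeholders):

Code:
# label each drive as an LVM physical volume
pvcreate /dev/sdb /dev/sdc
# group them into one volume group (linear concatenation, no striping)
vgcreate vg_ftp /dev/sdb /dev/sdc
# carve a single logical volume out of all the free space
lvcreate -l 100%FREE -n data vg_ftp
mkfs.ext4 /dev/vg_ftp/data

One practical difference: a linear LVM (or JBOD) spreads files rather than blocks across the drives, so a single-drive failure may leave part of the data readable, whereas a RAID0 stripe loses everything. A live CD can activate the volume group with vgchange -ay just as easily as it can assemble md arrays.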
New to Linux and wanting to install Ubuntu 9.10 or 10.04 on a single separate 80GB drive. I have Windows installed on 2 80GB drives in RAID0 (nVidia controller). I have installed 9.10, but Win7 will not load from the bootloader: it gives me the error "invalid signature". I've looked around and tried a few things to get it to load with no success. Is the RAID0 the issue?
If I try to install 10.04 it will hang and eventually error out, I believe on the RAID drive, because it comes up with /dev/mapper/nvidia_hhfbdccf1..
I am new to CentOS and Linux in general. I have just got myself a Dell 1950 server with 2x1TB SATA2 hard disks in it. The server comes with a PERC 5/i RAID card with 512MB. I put the disks in RAID 0 and the RAID card initialized them to 128, with write-back and read-ahead. When I loaded CentOS it did not recognize the Dell layout and therefore wanted to initialize it again, so I did as it wanted. I created an 80GB boot and OS partition and a 100GB swap; the rest was put into LVM space to run SolusVM. But I found the RAID 0 to be getting extremely slow read and write speeds.
Example with the same disks:
Desktop PC (Windows Vista 64): max 214 MB/s
Server with CentOS: 104 MB/s
Now, I am told that I need to align the OS with the RAID card settings, but I have no idea how to do this. How do I do this, in plain, easy, step-by-step instructions? I mean: how to calculate it, how to format the disk this way, and what files to edit where if needed. I have spent hours trying to figure out why my RAID 0 is slower than a single disk.
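A hedged sketch of the usual alignment recipe for an ext filesystem on a striped array. The numbers below assume a 128 KB stripe per disk, two data disks and a 4 KB filesystem block size; substitute whatever the PERC actually reports (the OS only sees the controller's virtual disk, here /dev/sda):

Code:
# start the partition on a 1 MiB boundary so it is aligned to any
# reasonable stripe size (recent fdisk/parted do this by default)
parted -s /dev/sda mkpart primary 1MiB 100%

# stride       = stripe size per disk / block size = 128 KB / 4 KB = 32
# stripe-width = stride * number of data disks    = 32 * 2        = 64
mkfs.ext4 -E stride=32,stripe-width=64 /dev/sda1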
Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.
[Code]....
Then I tested my RAID by hot-pulling the sda wire (ouch). It worked fine: the system still worked, and it also managed to reboot from the remaining sdb (which of course showed up as sda, lacking the first drive). Now I am trying to recover this pre-crash state. Adding the first disk (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk being sda...
At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen which would let me wait for a boot for weeks... So my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again was: zero sda's first and last gigabyte to kill any IDs, duplicate sdb's first cylinder to sda to make it bootable, reinitialize sdb's partition table using command o in fdisk for a new disk ID, recreate the sda1 partition, and add sda1 to md0.
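One thing the copy-the-first-cylinder trick doesn't cover on its own: the boot loader has to be installed on every disk you ever want to boot from, so each mirror member carries its own boot code. A hedged sketch, assuming GRUB 2 on Ubuntu 10.04 and that the replaced disk is currently sda:

Code:
# reinstall the boot loader to both mirror members
grub-install /dev/sda
grub-install /dev/sdb
# regenerate grub.cfg and the initramfs so they reference the md/LVM
# devices by UUID rather than by disk order
update-grub
update-initramfs -u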
120GB SATA HDD - primary OS drive
3 x 1.0TB SATA HDD - RAID 5 array
This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB drive, which I had problems with - Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disc under the Live CD and proceeded to install F12.
Once F12 was installed, I loaded up mdadm to re-activate my RAID 5 array, using 'sudo mdadm --assemble --uuid=(the uuid)' - and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. I erased /dev/sdb, and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb' and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.
[Code].....
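For what it's worth, that error normally means the array uses external metadata (e.g. Intel IMSM fakeraid), so /dev/md0 is only a member of a parent container device, and new disks are added to the container rather than to the member array. A hedged sketch; the container device name is an assumption you would confirm from /proc/mdstat or mdadm --detail --scan:

Code:
# find the container (it usually shows up as something like md127
# with metadata type "imsm" or "ddf")
cat /proc/mdstat
mdadm --detail --scan
# add the blank disk to the container, not to the member array
mdadm --add /dev/md127 /dev/sdb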
I don't believe the hard drives are connected in the exact same order they were in before - I disconnected everything in the system and blew it out (it was pretty dusty).
I can't seem to get my RAID 5 (consisting of 8 1TB hard drives) assembled for some reason, and I have no idea why and can't find any solutions online. I'll go ahead and show what my problem is:
Here are all my hard drives:
Code:
server:~$ sudo fdisk -l

Disk /dev/sda: 10.2 GB, 10242892800 bytes
255 heads, 63 sectors/track, 1245 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004f041
[Code]....
So as you can see, the array for those last four looks fine; however, for the first four it marks the last four drives as faulty for some reason. I am kind of clueless what to do from this point on, honestly. I have data on this array that I'd really like to save.
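When the members disagree like this, the usual (hedged) next step is to compare the event counters in each member's superblock and, if the data-bearing members are close, try a forced assembly. This is only a sketch, with /dev/sd[b-i]1 standing in for the eight member partitions, and it is worth doing with the array stopped and, ideally, after imaging the drives:

Code:
# compare the per-member metadata; look at the Events counter and Array State
mdadm --examine /dev/sd[b-i]1 | egrep '/dev|Events|State'
# stop any half-assembled array, then force assembly from the members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[b-i]1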
I have a machine that supports 4 SATA drives, with identical drives in each slot. Can someone please tell me how I can create a RAID5 array on Linux and then also have the 4th drive (/dev/sdd) as a hot spare for any of the three drives in the array/volume? I did the following (see the sketch after the partition list):
/device = type @ partition size
/dev/sda1 = fd @ 100 MB (bootable, for /boot)
/dev/sda2 = fd @ 1 GB (going to be used for swap)
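Assuming each of the four disks gets the same partition layout, a hedged sketch of creating the RAID5 with sdd's partition as the hot spare. The sdX3 data partition is my assumption, since the list above is truncated after the swap partition:

Code:
# three active devices plus one hot spare; sdX3 is the assumed data partition
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --spare-devices=1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# verify that sdd3 shows up with "(S)" (spare) next to the active members
cat /proc/mdstat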
I've got 2 servers (xen1 and xen2 are their hostnames) with the configuration below. Each server has 4 SATA disks, 1 TB each.
16 GB DDR3, Debian Squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB and 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. There is LVM2 on top of that RAID10; the volume group is named xenlvm (the servers are meant to be Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var and /home are located on logical volumes of small size. (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think.)
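For reference, a hedged sketch of how such a layout is typically put together; the md device name and partition numbers are assumptions, only the xenlvm volume group name comes from the description above:

Code:
# RAID10 across the large partitions of all four disks
mdadm --create /dev/md2 --level=10 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# LVM2 on top of the RAID10, matching the xenlvm volume group name
pvcreate /dev/md2
vgcreate xenlvm /dev/md2
lvcreate -L 10G -n root xenlvm
lvcreate -L 10G -n var  xenlvm
lvcreate -L 10G -n home xenlvm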
I am running a 14-disk RAID 6 on mdadm behind 2 LSI SAS2008s in JBOD mode (no HW RAID) on Debian 7 in BIOS legacy mode.
Grub2 is dropping to a rescue shell complaining that "no such device" exists for "mduuid/b1c40379914e5d18dddb893b4dc5a28f".
Output from mdadm:
Code:
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Nov  7 17:06:02 2012
     Raid Level : raid6
     Array Size : 35160446976 (33531.62 GiB 36004.30 GB)
  Used Dev Size : 2930037248 (2794.30 GiB 3000.36 GB)
   Raid Devices : 14
[Code] ....
Output from blkid:
Code:
# blkid
/dev/md0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
/dev/md/0: UUID="2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb" TYPE="xfs"
/dev/sdd2: UUID="b1c40379-914e-5d18-dddb-893b4dc5a28f" UUID_SUB="09a00673-c9c1-dc15-b792-f0226016a8a6" LABEL="media:0" TYPE="linux_raid_member"
[Code] ....
The UUID for md0 is `2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` so I do not understand why grub insists on looking for `b1c40379914e5d18dddb893b4dc5a28f`.
Here is the output from `bootinfoscript` 0.61. This contains a lot of detailed information, and I couldn't find anything wrong with any of it: [URL] .....
During the grub rescue, an `ls` shows the member disks and also shows `(md/0)`, but if I try an `ls (md/0)` I get an unknown disk error. Trying an `ls` on any member device results in "unknown filesystem". The filesystem on md0 is XFS, and I assume the unknown filesystem is normal if it's trying to read an individual disk instead of md0.
I have come close to losing my mind over this. I've tried uninstalling and reinstalling grub numerous times, `update-initramfs -u -k all` numerous times, `update-grub` numerous times, and `grub-install` numerous times to all member disks without error, etc.
I even tried manually editing `grub.cfg` to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `(md/0)` and then re-install grub, but the exact same error of no such device mduuid/b1c40379914e5d18dddb893b4dc5a28f still happened.
[URL] ....
One thing I noticed is that it is only showing half the disks. I am not sure whether this matters, but one theory would be that it is because there are two LSI cards physically in the machine.
This last screenshot was taken after I specifically altered grub.cfg to replace all instances of `mduuid/b1c40379914e5d18dddb893b4dc5a28f` with `mduuid/2c61b08d-cb1f-4c2c-8ce0-eaea15af32fb` and then re-ran grub-install on all member drives. Where it is getting this old b1c* address, I have no clue.
I even tried installing a SATA drive on /dev/sda, outside of the array, and installing grub on it and booting from it. Still, the same error.
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a RAID5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:
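As an aside on the hours-of-resync part: adding an internal write-intent bitmap to the array lets a temporarily missing member be re-added with only the changed regions resynced, instead of a full rebuild. A hedged sketch, assuming the array is /dev/md0 and /dev/sda is the member that dropped out:

Code:
# add a write-intent bitmap to the running array
mdadm --grow --bitmap=internal /dev/md0
# later, a dropped member can be re-added and only dirty regions resync
mdadm /dev/md0 --re-add /dev/sda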
I'm performing some experiments on a box with 5x36.4GB disks. Right now I have partitioned all the drives in chunks of 1GB and put them together 3 by 3 in RAID5. (Yeah, there are a lot of md devices...) 'mdadm --query' and 'mdadm --detail' give summary/detailed information about the md devices, on condition that I specify the device.
My question is whether there is a way to get a summary overview of all the md devices at once. I've gone through the manual, but haven't found anything which resembles a useful command.
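Two things that usually serve as that overview, offered here as a hedged suggestion rather than the one true answer:

Code:
# kernel-side summary of every active md device and its state
cat /proc/mdstat
# mdadm's own one-line-per-array summary (also usable as mdadm.conf content)
mdadm --detail --scan

Adding --verbose to the scan prints the member devices of each array as well.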
My only goal is to have a RAID5 that auto-assembles and auto-mounts. Hardware: 4x2TB SATA (RAID disks), 1x500GB IDE (OS disk), 1 IDE DVD drive, all plugged directly into the motherboard (nForce 750i SLI).
Starting partitions on the RAID disks: GPT, ext4. The problem occurs when I restart my computer after building the array for the first time. I am able to see it assemble, I am able to partition it, and I even mounted it once. This is the second time I've built it, so I have watched everything that happened. I don't know if this has anything to do with my problem, but when I created the RAID my drive designations were: sda - 500GB (OS), sd[bcde] - 2TB (RAID). When I restarted: sd[abcd] - 2TB (RAID), sde - 500GB (OS).
I have a strange issue with my RAID5 array - it worked fine for a month, then a couple of days ago it didn't start on boot, with mdadm reporting "Input/Output error". I didn't panic; I restarted my computer, same error. Then I opened the Disk Utility and it reported State: Not running, partially assembled - I don't know why. I pressed Stop RAID Array and started it again, and voila - it reported State: Running. I checked the components list and there was nothing wrong with it. So I ran the Check Array utility, waited almost 3 hours for it to finish, and it worked since then, till this morning - I started my computer, and here we go, same error.
See screenshots:
This is an initial state just after computer startup:
This is after I stop and start RAID5:
This is a components list:
I can see nothing wrong there, yet I'm not sure why mdadm fails on boot. I do not really like the "Windows solution": I guess when I check my array again it will work fine again, but then it can fail in the same way for no known reason.
I am planning on setting up a 4x1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm"; all worked fine except for the following error:
Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found failed.
The end result is that the /dev/md0 device has not been created, as can be seen here:
Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
After googling, I found the following bug which describes the issue: [URL] However it was reported way back in April 2009, and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug, to get me up and running?
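One hedged workaround, given that the postinst only failed at the device-node step: the md node does not have to exist beforehand, since mdadm can create it itself when you create the array, or you can make the node by hand (md devices use block major number 9). The drive letters below are placeholders:

Code:
# let mdadm create the node while creating the array
mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sd[bcde]1
# or create the block device node manually beforehand
mknod /dev/md0 b 9 0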
I have been having some odd issues over the last day or so while trying to get a RAID 5 array running in software under Kubuntu. I installed 3 1TB drives and started up, and my sd* order got all messed up (sda was now sdc and so on). This wasn't entirely unexpected, so I fixed up fstab and booted again. I found all three of the drives I installed, set them to RAID auto-detect and used mdadm to create /dev/md0. I then created mdadm.conf by piping the output of mdadm --detail --scan --verbose into /etc/mdadm.conf. At this point, everything was still going swimmingly. I copied over a few hundred GB of data from another failing drive and everything seemed OK. I went to reboot once the copy was done and everything just went weird. All of the sd* drives went back to the original order. Of course, this meant that the mdadm.conf was wrong. I tried to just change the device list, but that didn't work. I then deleted mdadm.conf and rebooted. The drive list stayed in the original order this time, so I just tried manually starting the array.
By erasing the partition table of the 3rd drive, I've been able to get it to the status of spare, but it says it is busy when I try to add it to the array. A grep through dmesg makes me think that md has a lock on it. I'm not sure where to go with it now. If anyone has any pointers, I would like to hear them.
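A hedged guess at the "busy" part: if a stale superblock caused the kernel to auto-assemble that drive into another (often inactive) md device, it stays locked until that device is stopped. A sketch of how to check and release it; the md name and /dev/sdd1 are stand-ins for whatever your third drive and its member partition actually are:

Code:
# see whether the disk is already claimed by some mdX device
cat /proc/mdstat
# stop whichever stale array is holding it (the name here is an assumption)
mdadm --stop /dev/md127
# wipe the old raid superblock so the member can be added cleanly
mdadm --zero-superblock /dev/sdd1
mdadm --add /dev/md0 /dev/sdd1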
Long time lurker, still a Linux noob, but I'm learning. I currently have a home media server setup with the following hardware specs:
MSI P45 motherboard
Intel Core 2 Quad Q6600
8GB DDR2 RAM
2x250GB WD HDD in RAID1 via LVM (boot/swap etc.)
8x2TB Hitachi HDD in RAID5 via mdadm (media/data)
The server mainly serves files for HTPCs around the house and runs a few VMs with VMware Server. I have recently picked up the following hardware which I'm thinking about upgrading to:
My main concern is: will I be able to just swap the drives into the new system and have everything pick up where it left off? More specifically, will mdadm be able to detect the 8x2TB drives attached to the new hardware and re-assemble the array?
My buddy who helped me set this system up isn't sure, so I figured I'd ask here first. The boards do have the same ICH10R southbridge providing 6 of the SATA ports, and 2 more will be run off of the extra onboard controller. I don't have a lot of Linux experience switching out core parts, but in Windows I've had great success moving things between various Intel chipsets and architectures, from P965 -> P35 -> P45 -> H55 -> X58.
I have a Linksys NSLU2 with four USB hard drives attached to it. One is for the OS; the other three are set up as a RAID5 array. Yes, I know the RAID will be slow, but the server is only for storage and will realistically get accessed once or twice a week at most. I want the drives to spin down, but mdadm is doing something in the background that accesses them. An lsof on the RAID device returns nothing at all. The drives are blinking non-stop and never spin down until I stop the RAID; then they all spin down nicely after the appropriate time.
They are Western Digital My Book Essentials and will spin down by themselves if there is no access. What can I shut down in mdadm to get it to stop continually accessing the drives? Is it the sync mechanism in the software RAID that is doing this? I tried setting the monitor to --scan -1 to get it to check the device just once, but to no avail. I even went back and formatted the RAID with ext2, thinking maybe the journaling had something to do with it. There are no files on the RAID device; it's empty.
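A couple of hedged things worth checking before blaming the monitor: whether the initial resync or a scheduled array check is still running, and whether the distro's periodic check (the mdadm checkarray cron job on Debian-style systems) is kicking in. For example, assuming the array is md0:

Code:
# is a resync/check currently in progress?
cat /proc/mdstat
cat /sys/block/md0/md/sync_action
# if a check/resync is running and you want the disks idle, pause it
echo idle > /sys/block/md0/md/sync_action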
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1
and I get
md1: raid array is not clean -- starting background reconstruction
Why is it not clean? Should I be worried? The HD is not new; it has been used before in a RAID array, but it has been repartitioned.
Server with 4 disk partitions in a RAID 5 array using md. Yesterday the array failed with two devices showing as faulty. After rebooting from rescue, I was able to force the assembly and start the array, and everything looks to be okay as far as the data goes, but when I run: Code: mdadm --examine /dev/sda3
I get (truncating to the interesting bits):
Code:
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 1448195216 (690.55 GiB 741.48 GB)
     Array Size : 4344585216 (2071.66 GiB 2224.43 GB)
  Used Dev Size : 1448195072 (690.55 GiB 741.48 GB)
     Array Slot : 0 (0, 1, 2, failed, 3)
    Array State : Uuuu 1 failed
And Code: mdadm --detail /dev/md1 yields:
Code:
     Array Size : 2172292608 (2071.66 GiB 2224.43 GB)
  Used Dev Size : 1448195072 (1381.11 GiB 1482.95 GB)
   Raid Devices : 4
  Total Devices : 4
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
Additionally, the 'slot' for devices a-d lines up like this: a - 0, b - 1, c - 2, d - 4 (!). The first number ('Array Size') from examine is twice as big as it should be, based on the output from detail and comparing to the twinned server. And why do the 'Array State' and 'Array Slot' from examine indicate there's a 5th device that's not indicated anywhere else?
I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04 on it, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't: when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that there are 4 drives present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:
root@warren-P5K-E:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
I am currently having problems with my RAID partition. First, two disks were having trouble (sde, sdf). Through smartctl I noticed there were some bad blocks, so first I set them to failed and re-added them, so that the RAID array would overwrite the bad blocks. Since that didn't work, I went ahead and replaced the disks. The recovery process was slow, and I left things running overnight. This morning I found out that another disk (sdb) has failed. Strangely enough, the array has not become inactive.
Does anyone have any recommendations as to the steps to take with regards to recovery/fixing the problem? The disk is basically full, so I haven't written anything to it in the interim.