Ubuntu Server 11.04 i386. I've used Linux on and off for years but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS, but decided to give Ubuntu+Webmin a try, and up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array. I shut down, remove a drive, then boot up. The array won't start at all, and comes up with this error during boot:
the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery
If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a" I get the response:
mount: special device /dev/RAIDVG1/RAIDLV1 does not exist
I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine, and is in a clean state.
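One thing worth checking, offered as a hedged suggestion rather than a confirmed fix: files under /etc/initramfs-tools/conf.d/ are only read when the initramfs is rebuilt, so setting BOOT_DEGRADED=true has no effect until you regenerate it:

Code:
sudo update-initramfs -u

After that, the next degraded boot should honour the setting.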
So my server's 7 HDs in RAID 5 were all working well until one of them died. The HD that died sort of works: it can read about half a file, but it also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: "The drive for /media_kbt is not ready or present, press S to skip or M for manual recovery". I hit S and then go to Disk Utility, but I can't start the array or add disks to it.
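A sketch of starting the array degraded from the command line instead of Disk Utility (the md device and member names here are assumptions; substitute the real ones):

Code:
# stop any half-assembled remnant first
sudo mdadm --stop /dev/md0
# --run starts the array even though one member is missing
sudo mdadm --assemble --run /dev/md0 /dev/sd[b-g]1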
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty, and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array. Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

As I don't want to ruin what may be the small chance I have left to rescue my data, I would like to hear the input of this wise community.
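Before resorting to --create, the less destructive step that is usually tried first (a hedged sketch; it assumes the two "spare" drives still carry mostly consistent data) is a forced assembly, which patches up event counters instead of rewriting superblocks:

Code:
sudo mdadm --stop /dev/md1
sudo mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

If --create --assume-clean really does end up being the last resort, note that the drive order and chunk size must exactly match the original creation, or the data will be scrambled.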
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced in, to make a raid5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can re-add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:
I also get sent to a Busybox (initramfs) shell with no text editor and don't know how to copy all the error messages and post them here. If there is a way, let me know. I've typed it out in the meantime:
Code:
md0 : inactive sdxxxx
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
This is with a 3 disk RAID5 array. I turned off the system, pulled out a drive, and started it back up. Fresh install, all I've done so far is apt-get update and upgrade.
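When an array exists but isn't assembled at boot, the usual suspect is a stale or missing mdadm.conf inside the initramfs. A minimal sketch of refreshing both (assuming stock Debian/Ubuntu paths):

Code:
# record the running array's identity in mdadm.conf
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
# rebuild the initramfs so early boot sees the updated file
sudo update-initramfs -u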
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. This has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
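The sizes reported are consistent with Disk Utility building a striped (RAID-0 style) set, whose capacity is limited to N x the smallest member (2 x 250 GB = 500 GB, but 3 x 160 GB = 480 GB once the 160 joins). For a pure concatenation that uses every disk's full size, LVM's linear mode may be a better fit; a sketch, with hypothetical device and volume names:

Code:
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd         # label each backup disk for LVM
sudo vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd
sudo lvcreate -l 100%FREE -n backuplv backupvg   # one linear LV spanning all disks
sudo mkfs.ext4 /dev/backupvg/backuplv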
I recently upgraded from lenny to squeeze, and my raid array is degraded immediately after boot. Some info on my machine: I've got a built-in SATA II chipset with 4 drives /dev/sda-d that I use for my RAID5 system, and an IDE drive /dev/sde. Before I upgraded to squeeze, the IDE drive was at /dev/hda. I did the usual 2-step upgrade (kernel/udev first, reboot, then everything else). After the first reboot, the IDE drive became /dev/sda and the SATA drives were /dev/sdb-e. I updated mdadm.conf to reflect the new drive naming and added /dev/sde to the array; it rebuilt successfully and everything was back online. After the 2nd reboot, the IDE drive became /dev/sde and my SATA drives went back to /dev/sda-d. No biggie; updated mdadm.conf again, rebuilt, and everything works.
Now that everything has been upgraded, the RAID array still becomes degraded upon boot. I can always add /dev/sda back to the array, and it's always rebuilt successfully. Here are some interesting lines from dmesg: It finds all my drives:
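Since the kernel keeps reshuffling sda..sde between boots, identifying members by device name in mdadm.conf is fragile. mdadm can key the array on its UUID instead; a sketch (the UUID shown is a placeholder):

Code:
# print an ARRAY line keyed on the array UUID, not device names
sudo mdadm --detail --scan
# example output (UUID here is hypothetical):
# ARRAY /dev/md0 metadata=0.90 UUID=12345678:9abcdef0:12345678:9abcdef0
# put that line in /etc/mdadm/mdadm.conf together with:
# DEVICE partitions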
I have an HTPC that was giving me an insane amount of problems after 3 months of good use.
1x 250gig Samsung Drive (OS Drive)
3x 1TB Western Digital Caviar Green (Raid-5)
In 9.10, the raid was working fine. I decided to fresh install Ubuntu 10.04 and I can't seem to start the raid array. In Disk Utility, the array shows up but when I try to start it I get the error "Not enough components to start the array"
I've tried to assemble the array using mdadm and the following:
I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).
My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:
Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)
...
What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, 100% for my start/end with no success.
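What usually satisfies the alignment check is starting the first partition at 1MiB (sector 2048), which is aligned for any stripe size the Areca is likely to use; a sketch, assuming a parted recent enough to have align-check (2.1+):

Code:
(parted) mklabel gpt
(parted) unit MiB
(parted) mkpart primary 1 100%
(parted) align-check optimal 1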
This is the message I get when I try to start it:

Code:
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array

Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off of it would be awesome.
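To see why only 2 drives are accepted, comparing event counters across the members usually tells the story; a sketch with assumed device names (the member whose Events count lags is the stale one, and a forced assembly can often pull it back in):

Code:
# compare the Events counter across members; the odd one out is the stale drive
sudo mdadm --examine /dev/sd[bcd]1 | grep -E 'Events|/dev/'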
I just restarted my server (Ubuntu 9.04 server, running on ESXi 4.0), and while copying files onto the server using Samba I got strange problems and the connection was lost. When I rebooted the whole system, ESXi as well as Ubuntu Server, I found problems on my RAID disk.
In the directory where the new files were added I have a lot of files, but many of them do not show any info except their name:
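Entries with names but no metadata usually point at filesystem damage from the interrupted copy. A cautious sketch of checking it, assuming the array is /dev/md0 and can be unmounted first (the mount point is a guess):

Code:
sudo umount /mnt/raid        # unmount before checking
sudo fsck -f /dev/md0        # full check; read the repair prompts carefully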
I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!) Using dd to measure, write performance is only at 8.7 MB/s. Read is great though, at 74.5 MB/s. The tests were run straight after rebooting and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.
As you can see from the bo column, there is definitely something stalling. As per top output, the %wa (waiting for I/O) is always around 75%; however, as per the above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new and smartctl (smartmontools) does not detect any faults.
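For reference, a write test that forces data out of the page cache (so the number reflects the disks rather than RAM) looks like this; the target path is an assumption:

Code:
# conv=fdatasync makes dd flush to disk before reporting the rate
dd if=/dev/zero of=/mnt/raid1/testfile bs=1M count=1024 conv=fdatasync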
I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:
The disk drive for /var/media is not ready yet Press S to skip ... Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the "mdadm assemble" is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.
As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know:
1) why this broke in an upgrade and is it a known problem? 2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
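As a middle ground between hanging and dropping the fstab entry, Ubuntu's mountall understands a nobootwait option that lets boot proceed while still auto-mounting the filesystem when the device appears; a hedged fstab sketch (filesystem type assumed):

Code:
# /etc/fstab - nobootwait stops boot from blocking on this mount
/dev/md0  /var/media  ext4  defaults,nobootwait  0  2

The underlying ordering problem is often a stale copy of mdadm.conf inside the initramfs, so rebuilding it with "sudo update-initramfs -u" is worth trying as the actual fix.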
I am trying to use 3 3TB Western Digital drives in a raid 5 software array. The trouble seems to be that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.
Here are the commands and output:

$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name:  BackupFull6
RAID type:  RAID5
RAID size:  5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] : y
$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name   : isw_cdjhcaegij_BackupFull6
size   : 3131048448
stride : 128
type   : raid5_la
status : ok
subsets: 0
devs   : 3
spares : 0
So I cannot understand why the size of the created array is only 3131048448 blocks, or about 1.5 TB. The first command seemed to imply it was going to create an array of 5589G (11720982528 blocks). Suspiciously, 11720982528 - 2 x 2^32 = 3131047936, almost exactly the reported size, so this looks like a 32-bit sector-count overflow somewhere in the ISW metadata path.
System is: Description: Ubuntu 10.04.2 LTS Release: 10.04 Codename: lucid
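If BIOS visibility of the RAID set isn't required, native mdadm avoids the dmraid/ISW path (and its apparent 2TB-per-member limit) entirely. A hedged sketch, reusing the device names from the post; note this destroys whatever is currently on the disks:

Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
sudo mkfs.ext4 /dev/md0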
I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up in a mirrored software raid array using mdadm. They've synced, I've put file systems on them (1 x ext4, 2 x luks + ext4) and I can mount them. I've checked the partitions using fdisk. I've checked the filesystems using fsck. So far so good. Next step is that I'd like mdadm to automatically assemble them on boot. (Not bothered about mounting and crypttabing yet.)
I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions. I've checked that this matches the output of sudo mdadm --detail --scan
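To verify that the new mdadm.conf actually assembles the arrays before trusting a reboot, a dry run like this can help (array name assumed). Also note the initramfs carries its own copy of mdadm.conf, so a "sudo update-initramfs -u" is needed before boot-time assembly will see the new file:

Code:
sudo mdadm --stop /dev/md1       # stop the running array (unmount it first)
sudo mdadm --assemble --scan     # re-assemble purely from mdadm.conf
cat /proc/mdstat                 # confirm the array came back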
Short story: I have a problem with one of my services (mediatomb) - it requires an md RAID array to be mounted in order to start, because it uses files from it. $remote_fs is added by default to the "Required-Start" line of the init script, so I thought that this should be enough. However, the mediatomb service fails to start on boot, but starts just fine when I execute "service mediatomb start" later. The array is entered in /etc/fstab and is automatically mounted on boot.
This is my file server (Ubuntu Server 10.10), which has a raid array created with mdadm (mounted on /z), and the root filesystem is located on a USB thumb drive. I've installed mediatomb, but I wanted to put its database files on the raid array instead of the root fs, so I've symlinked /var/lib/mediatomb (the default path) to /z/mediatomb on the array. This is because the mediatomb DB is supposed to be updated fairly often, so I didn't want it to stay on the flash drive.
Problem is, the mediatomb service can't start on boot - in /var/log/mediatomb.log, it says "2011-03-07 19:22:47 ERROR: /var/lib/mediatomb : 20 x No such file or directory". As I said, it works fine when manually started later...
This is the fstab entry for the raid array:
Code: ...
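One workaround, sketched under the assumption that editing the init script is acceptable: have /etc/init.d/mediatomb wait until /z is actually mounted before starting (mountpoint is part of standard util-linux):

Code:
# hypothetical addition near the top of the start) case in /etc/init.d/mediatomb:
# wait up to 30 seconds for /z to be mounted before launching the daemon
for i in $(seq 1 30); do
    mountpoint -q /z && break
    sleep 1
done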
I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing raid1 array. mdadm --detail /dev/md0 shows:

Code:
0   0   0   -1   removed
1   8   17   1   active sync   /dev/sdb1

I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue:

Code:
mdadm --manage /dev/md0 --fail /dev/sda1

But mdadm's response is:

Code:
mdadm: hot remove failed for /dev/sda1: no such device or address

I thought I must mark the failed drive as "failed" to prevent the raid1 from trying to mirror in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the raid array from automatically trying to mirror in the wrong direction.
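Since --detail already shows slot 0 as "removed", there is nothing left to mark as failed; the old member is gone from the array's point of view. As for direction: mdadm always rebuilds from the active member onto the newly added device, never the reverse, so the mirror direction takes care of itself. A sketch of bringing in the replacement, assuming it appears as /dev/sda with a partition sda1 matching sdb1's size:

Code:
sudo mdadm --zero-superblock /dev/sda1        # clear stale RAID metadata on the used disk
sudo mdadm --manage /dev/md0 --add /dev/sda1  # rebuild runs from sdb1 onto sda1
cat /proc/mdstat                              # watch the rebuild progress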
I recently upgraded from 10.04 to 11.04 and I now often get boot messages about a degraded raid.
I'm fairly experienced, but I'm confused about which raid it is talking about. I have a raid5 array, but I don't boot off that, and it seems fine when I finally get the system to boot. Previously, I didn't have any other raid arrays, but now I seem to have two others, called md126 and md127, and they both seem to be degraded. Where did they come from?
I *do* have two 80GB drives that I was booting from in RAID1, but that was a looong time ago, and I have since only booted from one of them. The partition table indeed shows partitions 1 and 5 are raid autodetect, and /proc/mdstat shows they are degraded ([U_]). Could it be that this is causing the problem? If so, why has this only started to happen since the upgrade from 10.04 to 11.04? Anyway, perhaps it is a good idea to add that second disk back into the raid1 array. If so, how do I do that? Note that I've also noticed that when I boot and get to the screen where I select from the different kernel versions, I now get a couple of really old ones too; my thought is that these are from the raid1 disk that I stopped using. If I add it to the array, how can I be sure it will mirror in the correct direction?
It could be that I have fairly recently plugged in that second RAID1 disk, after a long time of not having enough spare sata sockets (I switched my RAID5 array from 8 disks to only 3 disks, so suddenly had a lot more spare sockets).
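To pin down what md126 and md127 actually are, asking mdadm directly is the quickest route; a sketch (the member device name is a guess):

Code:
cat /proc/mdstat                 # lists all arrays the kernel knows about
sudo mdadm --detail /dev/md126   # shows members, level and UUID of the mystery array
sudo mdadm --examine /dev/sda1   # reads the on-disk superblock of a suspected member

On direction: mdadm copies from the currently active member onto whatever device you --add, so adding the long-unused disk to the running array will overwrite the stale copy, not the live one.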
I've recently started having an issue with an mdadm RAID 6 array that's been operational for about 2500 hours.
Intermittently during write operations the array stalls, dropping to almost 0 write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference and it seems completely random. Sometimes copying a 5 GB dataset results in no slowdown; other times a torrent downloading to the array at 50kb/sec does cause a slowdown, and vice versa.
The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.
Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100MB/sec for each drive, ~425MB/sec for the array).
The only thing I did notice is that udma6 is enabled for all the ICH9R drives, while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.
The si3132 drive is using the sata_sil24 driver. Nothing of interest appears in the kern or syslog. During this time top shows very high wait time.
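For what it's worth, the supported and negotiated DMA modes can be read per drive like this (device name assumed); the active mode is the one marked with an asterisk:

Code:
sudo hdparm -I /dev/sdg | grep -i 'udma'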
The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller, which now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.
Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so which firmware is best supported by the linux driver?
I know I'm not using its raid features, but I've dealt with controllers that needed to be in raid mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.
Using a fresh copy of Server 10.04, I'm trying to simulate a failed raid array on a pair of 2TB disks. Here is the procedure I have been following so far:
- Remove the dead disk partitions from each of the raid 1 arrays (substitute the correct md devices and partitions):
- mdadm /dev/md0 -r /dev/sdb2
- mdadm /dev/md1 -r /dev/sdb3
I get an error here that sfdisk does not support GPT (GUID partition table). I thought sfdisk did support GPT? It says to use parted, but I can't find a command in the parted documentation that copies a partition table over from another disk. Any suggestions? I suppose I could make the partitions manually, but I'm writing a procedure for people who aren't that technical and I need it to be simple enough to be run in my absence; manually building the partitions would be too hard for them.
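sfdisk indeed only handles MBR; for GPT the equivalent one-liner lives in sgdisk (from the gdisk package). A sketch, assuming /dev/sda is the surviving disk and /dev/sdb the replacement; note the destination comes first with -R:

Code:
sudo sgdisk -R=/dev/sdb /dev/sda   # copy sda's GPT partition table onto sdb
sudo sgdisk -G /dev/sdb            # give the copy fresh disk/partition GUIDs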
I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with x4 2TB disks). All is working well. My mdadm.conf file looks like this
# mdadm.conf # # Please refer to mdadm.conf(5) for information about this file.
If I were to lose the boot disk and need to remount the RAID array on a fresh installation, what steps do I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
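Yes, that assumption is basically right: the superblocks on the members carry everything needed, and a fresh system only needs mdadm installed and told to scan. A sketch of the recovery sequence:

Code:
sudo apt-get install mdadm      # pulls in the md tools on the new install
sudo mdadm --assemble --scan    # finds the array from the member superblocks
sudo mount /dev/md0 /mnt        # mount point is up to you

Keeping a copy of mdadm.conf is handy but not strictly required.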
Basically, I installed Debian Lenny, creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/". I did not pay much attention, but it seemed to work fine at the start - both raid devices were up early during boot, I think. After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling & installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.
Let me describe my HDD setup: when I do "sudo fdisk -l" it shows raid partitions sda1 and sda2 on sda, and sdb1 and sdb2 on sdb, which are my two 1 TB drives, plus sdc1, sdc2 and sdc5 on my 3rd, 160GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS). The problem is that the raid starts degraded every time (starts with 1 out of 2 devices). When doing "cat /proc/mdstat" I get "U_" statuses, and the 2nd device shows as "removed" on both md devices.
I can successfully run partx -a /dev/sdb, which gives me sdb1 and sdb2, and then I re-add those to the raid devices using "sudo mdadm --add /dev/md0 /dev/sdb1". After I re-add the devices it syncs the disks, and after about 3 hours I see a fine status in mdstat. However, when I reboot, it again starts with a degraded array. I get a feeling that after I re-add the disks and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.
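One thing worth ruling out (a guess, not a confirmed diagnosis): a stale md superblock on the whole device /dev/sdb can stop the kernel from seeing sdb1/sdb2 at boot, which would explain why partx is needed every time. Checking is non-destructive:

Code:
sudo mdadm --examine /dev/sdb    # a superblock on the raw disk would be the stale one
sudo mdadm --examine /dev/sdb1   # superblocks on the partitions are the real members
# only if the raw disk shows a superblock while the partitions are the real members:
# sudo mdadm --zero-superblock /dev/sdb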
I've been having some problems w/ my RAID 5 array, and after extensive investigation, I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, b/c it's a previously created array, but it didn't seem to like that. So, I checked into --create, and it will re-create the array w/out destroying the data, if the superblocks are persistent, which they seem to be. However, here's what I get:
My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?
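On why both signatures show up: with the old 0.90/1.0 metadata formats, the md superblock sits at the end of each member, so whatever bytes live at the start of the partition (an old ext2 superblock, or the stripe of the array's own filesystem that happened to land there) remain visible to filesystem-detection tools. Checking what each member advertises is harmless:

Code:
sudo file -s /dev/sdb1           # prints whatever signature sits at the start of the partition
sudo mdadm --examine /dev/sdb1   # prints the md superblock, if any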
I have a large 12 TB RAID array attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu; I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":
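At 12 TB the volume is almost certainly GPT-partitioned, which the old fdisk cannot read (its listing will look empty or show a single protective entry), so parted is the tool to find the partition; mounting then goes through ntfs-3g. A sketch with assumed device names:

Code:
sudo parted /dev/sdb print           # shows the GPT partitions fdisk can't see
sudo apt-get install ntfs-3g         # read/write NTFS driver, if not already present
sudo mount -t ntfs-3g /dev/sdb1 /mnt/raid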
I have a 3ware controller that has a RAID 1 of two SATA disks. After an outage, the Linux box (which is running Ubuntu) restarted and the partition is now mounted read-only. I only have the "/" mount point (this is a test server). Now, if I go to the 3ware controller by pressing ALT-3 while booting, I don't see any indication that there is something wrong with the disks. If I let the computer boot, I'm asked by fsck if I want to fix/ignore/etc. the inconsistencies found.
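If the controller reports the mirror healthy, a read-only root usually means the kernel remounted it after finding filesystem errors. A cautious sketch of the standard recovery; running fsck against the root filesystem is best done from a live/rescue boot with it unmounted (device name assumed):

Code:
# from a rescue environment, with the filesystem unmounted:
sudo fsck -f /dev/sda1          # answer the repair prompts carefully
# after a clean check, boot normally or remount in place:
sudo mount -o remount,rw /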
My system locked up while copying files last night, and now my RAID array will not start. I did verify my UUIDs (lesson learned). I do not understand a few things: 1. Why do different drives show "active sync" for different drives? 2. Why does "Disk Utility" tell me the RAID is not running, while when I try to assemble the RAID, mdadm returns: mdadm: device /dev/md0 already active - cannot assemble it. When I try to start the RAID using "Disk Utility":
Code:
Error assembling array: mdadm exited with exit code 1:
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 has no superblock - assembly aborted

So, I examine sdd1:

Code:
sudo mdadm -E /dev/sdd1
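"Device or resource busy" together with "already active" suggests a half-assembled md0 is holding sdd1, which would also explain the bogus "no superblock" message. The usual way out is to stop the remnant first (member list assumed):

Code:
sudo mdadm --stop /dev/md0                      # release the members from the half-started array
sudo mdadm --assemble /dev/md0 /dev/sd[abcd]1   # then assemble with the full member list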
I've tried to install Fedora 11, both 32 and 64 bit, on my main machine. It could not install, as it stops on the first install window. I've already filed a bug but really haven't seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am running Intel RAID software under W7 currently. It works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the raid array?
I am using Ubuntu 10.04 x64. I am not trying to install Ubuntu on a RAID 1 drive like all of the guides are for. I have a RAID 1 array that I am using for data storage. In Windows it shows as a single array just fine. In Linux it shows as 2 separate drives. I don't care how they show up, to be honest; I just need data written to one drive to be written to the other automatically as well, so my RAID isn't screwed up. Looking through different articles and forums I find a lot of stuff saying that it should show up as /dev/mapper/dxxx or something under /dev/mapper. All that shows up there for me is a device called "control", which doesn't seem to do anything.
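Seeing the two raw drives and a near-empty /dev/mapper usually means the motherboard fakeRAID metadata isn't being activated; on Ubuntu that is dmraid's job. A sketch:

Code:
sudo apt-get install dmraid   # device-mapper fakeRAID support
sudo dmraid -ay               # activate all detected RAID sets
ls /dev/mapper                # the mirrored volume should now appear here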