Ubuntu Servers :: Reassembling Raid 5 With Missing Disk - No Superblock?
Apr 14, 2010
One of my RAID disks (it was a software RAID5 with 4 drives) failed. I wanted to buy a new 1.5TB hard disk anyway, so I copied all the RAID data onto the new disk and disconnected the old ones. But then the new disk crashed right before I could mirror it with another 1.5TB disk, so now I need to reassemble my old RAID 5. I connected the drives in their former order except for the first one, because that was the failed disk. The problem is that mdadm can't find the RAID - no superblocks. fdisk doesn't show md RAID superblocks either, but fdisk never showed superblocks anyway. The thing is, this RAID was encrypted, but the encryption was on /dev/md0, so that shouldn't affect anything here, right?
Some info:
Code:
server:~# mdadm --misc -d /dev/sda
mdadm: option -d not valid in misc mode
server:~# mdadm --misc -t /dev/sda
[code]....
fdisk reports "Disk /dev/sdc doesn't contain a valid partition table", but fdisk always said there was no valid partition table even while the RAID was working.
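For completeness, this is what I plan to try next - as I understand it, the misc option I actually wanted is --examine (-E), which reads the md superblock straight off a member device, and since the encryption sat inside /dev/md0 it shouldn't hide that superblock. The device names below are just examples:
Code:
mdadm --examine /dev/sdb     # whole disk, in case the array was built on bare disks (which would also explain fdisk seeing no partition table)
mdadm --examine /dev/sdb1    # or on partitions
cat /proc/mdstat             # anything auto-assembled already?
mdadm --assemble --scan --verbose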
View 4 Replies
Nov 23, 2010
My RAID array has failed. I have two disks, /dev/sda and /dev/sdb. /dev/sdb has failed and I could not rebuild the array (mdadm returned that the device is busy), so I rebooted the machine. After that, the whole sdb disk went missing - fdisk -l now only shows sda. Did the disk die completely, or did the RAID just glitch?
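A couple of checks I'd run first to see whether the kernel still detects the drive at all (just my guesses at where to look):
Code:
dmesg | grep -i sdb          # did the controller report the drive, or drop it with errors?
cat /proc/mdstat             # current state of the array
sudo smartctl -i /dev/sdb    # if this fails outright, the drive probably isn't being detected at all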
View 8 Replies
View Related
Apr 5, 2011
I bought a disk from a friend who had used it in a RAID array, with the entire disk given over to RAID. To put that disk into service, I used dd-rescue to copy my old disk onto it in full, and managed to grow and set up the partition table without losing any data. My last step was to create a RAID between my entire old disk (a single partition) and a partition of the same size on the new disk. I ran into some problems, but managed to fix them imperfectly, and this setup is now working properly. The problems (and the imperfection) came from an issue I did not suspect: at some point, the original RAID superblock of the new disk, living on /dev/sda, survived dd-rescue, and so it is scanned by mdadm, which tries, obviously unsuccessfully, to use it.
Partition layout :
Code:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[code]....
This setup works properly apart from this phantom RAID5 declared on sda, which shows up here and there. Since it uses the same device name as my other, proper RAID setup, I don't know how to deactivate it, because mdadm uses the /dev/mdX name to identify arrays.
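One idea I'd like confirmed before trying it: rather than zeroing the stale superblock (which worries me, since the real member partition ends close to where the old whole-disk superblock lives), restrict what mdadm scans so bare /dev/sda is never considered. A sketch of /etc/mdadm/mdadm.conf:
Code:
# scan partitions only, never whole disks, so the leftover superblock on bare /dev/sda is ignored
DEVICE /dev/sd*[0-9]
# plus explicit ARRAY lines for the real arrays, e.g. generated with:
#   mdadm --detail --scan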
View 4 Replies
View Related
Mar 16, 2011
I'm trying to reassemble a broken RAID 5 array. The array has 3 drives. Here is some output from 'mdadm --examine': hutch:/home/sam # mdadm --examine /dev/sdb1 /dev/sdb1: - [URL]
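What I intend to do once I've examined all three members (device names assumed): compare their event counts and update times, and if they are close, try a forced assemble:
Code:
mdadm --examine /dev/sd[bcd]1 | grep -E 'Events|Update Time'
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1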
View 1 Replies
View Related
Jun 28, 2010
I have an SiI hardware SATA RAID card, with two 500GB disks in mirrored RAID configuration. When I first plugged them in and set it up, things seemed to work ok, but on boot the raid controller told me that the RAID needed rebuilding, and it would happen automatically after POST. So I didn't worry about it, and the drive mounted fine, and it's been that way for years. I just went in and manually on-line rebuilt the RAID in the controller's BIOS, and now when I boot into Ubuntu, both disks show up in fdisk, but neither show up in /dev/disk/by-uuid. Am I missing something?
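Since the SiI card is host/fake RAID, I assume the mirror is handled by dmraid rather than mdadm, so these are the checks I'd try (only a guess at the cause):
Code:
sudo dmraid -s     # list the RAID sets described by the card's metadata
sudo dmraid -ay    # try to activate them
ls /dev/mapper/    # the mirrored volume should appear here if activation worked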
View 9 Replies
View Related
Nov 3, 2010
Purchased four 2TB drives and created a RAID5 array, expecting to have 6TB of usable disk space; however, the actual usable space is 5.46TiB.
So, the question is where did the disk space go?
First off, I can say with certainty that each disk's usable capacity is verified at 2TB; I have mounted and formatted them on a non-Linux system (OS X).
Disks - 2TB per disk, tested with HFS, actual 2TB usable
root@server:/server# fdisk -l 2>/dev/null | egrep "sd[hijk]" | grep Disk
Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes
[Code]....
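For what it's worth, once decimal TB and binary TiB are kept apart (and one disk's worth of capacity goes to parity in RAID5), the numbers appear to reconcile exactly:
Code:
3 data disks x 2,000,398,934,016 bytes = 6,001,196,802,048 bytes (about 6.0 TB, decimal)
6,001,196,802,048 bytes / 1024^4       = ~5.46 TiB (what the tools report as usable)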
View 2 Replies
View Related
Mar 12, 2011
I'm testing my ability to recover a failed disk on a three-disk software RAID 5 setup.
I have used a 10.04 alternate install disk to set up a three-disk RAID 5 array according to this: [URL]. That guide is for a RAID1 setup; I followed it exactly except that I performed the steps on three drives rather than two and selected RAID5 instead of RAID1. Each disk is 500GB and has a 26GB swap partition, with the remaining space on each disk set as / with the boot flag on.
I installed the OS on my array and everything boots without a problem. After I booted up, I started a terminal and ran sudo dpkg-reconfigure mdadm to set boot-degraded to true, and rebooted.
Next, I shut down the computer, disconnected the power on drive 1 (sdb) and then tried to boot. I get this (not verbatim):
Quote:
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
mdadm: /dev/md0 has been started with 2 drives (out of 3)
[Code].....
*then a list of common problems and then:
ALERT! /dev/disk/by-uuid/<bunch of numbers and letters> does not exist. Dropping to a shell
Then it dumps me to initramfs. md0 is the swap partition. At this point I don't know what the heck to do; I'm skating on the edge of noobidity and this is pretty much over my head.
I want to use this server as a virtual machine server and the desired behavior would be that, if a hard drive should fail, the server would alert me via email and continue to run in a degraded state.
Is it even possible to install the OS on the array and run it degraded? Given the desired behavior, should I be looking at something other than RAID5? My client is broke so I'm trying to avoid a hardware RAID if I can do it.
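From what I've read (not yet verified), on 10.04 the degraded-boot setting can also be placed in an initramfs conf file, and the array can be started by hand from the initramfs prompt - roughly:
Code:
# guesswork on my part - assumes the standard mdadm initramfs hook on 10.04
echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u
# and from the (initramfs) prompt, if it still drops there:
#   mdadm --assemble --scan
#   mdadm --run /dev/md0
#   exit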
View 1 Replies
View Related
May 5, 2011
My system locked up while copying files last night. My RAID array will not start. I did verify my UUIDs (lesson learned). I do not understand a few things:
1. Why do different drives show "active sync" on different drives?
2. Why does Disk Utility tell me the RAID is not running, while when I try to assemble the RAID, mdadm returns: mdadm: device /dev/md0 already active - cannot assemble it
When I try to start the RAID using Disk Utility:
Code:
Error assembling array: mdadm exited with exit code 1: mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 has no superblock - assembly aborted
So, I examine sdd1:
Code:
sudo mdadm -E /dev/sdd1
[Code]...
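The step I suspect I'm missing (please confirm): the half-assembled /dev/md0 has to be stopped before anything can be reassembled, since that is presumably what keeps /dev/sdd1 busy:
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan --verbose   # or list the member partitions explicitly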
View 9 Replies
View Related
Sep 16, 2010
I am trying to recreate (after a zero-superblock) a RAID 10 array but am stuck on the near=1, far=2 layout. How can I set up this layout using mdadm?
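For what it's worth, my understanding is that "near=1, far=2" is simply how mdadm reports a plain far-2 layout, so the create command should just need --layout=f2 (adjust the device count and names to the real array):
Code:
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[bcde]1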
View 2 Replies
View Related
Nov 20, 2010
I recently upgraded to Fedora 14 from 13. It was an in-place upgrade. I can't recall for sure, but I do believe I had these problems in F13 before the upgrade. The F13 install was from a Live CD. Anyway, I have a three-drive RAID 5 array - 3x 750GB. For some very annoying reason, each time I reboot my F14 system, it hangs with an error about not being able to find a superblock on /dev/md126 and /dev/md127. I have tried to stop and remove /dev/md126 and /dev/md127, but they always seem to come back. I have also noticed in the output of fdisk -l that drives sda and sdd like to swap places sometimes, for a reason unknown to me. Any other output that is needed, please ask. I recreated the array just yesterday with:
Code:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
I would cat mdadm.conf in /etc, but I removed it previously to try to figure out the problem and it was not
[code]....
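In case it matters, this is the cleanup I'm considering after the recreate, so the stale md126/md127 metadata stops being auto-assembled at boot - this is just my guess at the right order:
Code:
sudo mdadm --stop /dev/md126 /dev/md127                  # if they are currently assembled
sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf    # record the real array
sudo dracut -f                                           # rebuild the initramfs so boot uses the new config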
View 1 Replies
View Related
Jun 7, 2010
I'm looking to set up a bit of a home server, and am wondering about storage. What I'd like is something like RAID 6, which has good redundancy built-in, but with this being a home server, I'd prefer to start a little smaller and leave room to build it up in future. I'd been looking at commercial products like the 'drobo', which seems fairly ideal, but I'd really like to see if I can do it myself. I understand that throwing the RAID into an LVM will allow for some expansion, but the last time I checked, most RAID setups called for the same sized disks, or at least limited the array by the size of the smallest disk present.
What I'd like is the ability to build a basic framework with a few cheap disks, and then as things start filling up, to be able to add larger ones (perhaps eventually pulling out smaller ones as though they'd failed and replacing them with big ones)
View 1 Replies
View Related
Jul 25, 2011
On a freshly installed Ubuntu Server 11.04 I noticed that the /dev/disk/by-path directory (and of course its contents) is missing:
Code:
root@xfiles2:~# ls -lA /dev/disk/
total 0
[code]....
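The only thing I've thought to try so far (purely a guess) is asking udev to regenerate the persistent block-device links:
Code:
sudo udevadm trigger --subsystem-match=block
ls -lA /dev/disk/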
View 1 Replies
View Related
Jan 9, 2011
I've got a RAID5 array of 4 disks, with Ubuntu 8.04 running on it, that is currently still working:
/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
smartmontools for /dev/sdc reports that there are 9 sectors pending reallocation:
Code:
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 9
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 9
And /dev/sdd has an increasing number of reallocated sectors (about 1 every couple of minutes):
Code:
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 1735
/dev/sdc has failed a couple of times this week (but I have always successfully re-added it to the RAID5). The increasing number of reallocated sectors on /dev/sdd concerns me even more.
I'm afraid that during the removal of /dev/sdd and the adding of a new /dev/sdd disk, the RAID might fall apart. That's why I would try to do it from an Ubuntu Live CD: if the RAID falls apart (/dev/sdc fails) while the new /dev/sdd disk is being added, I could still remove the new /dev/sdd, return the previous one, and assemble the RAID with:
/dev/sda
/dev/sdb
/dev/sdd (old one that was previously removed)
Does assembling the RAID in Ubuntu Live and adding a new disk as /dev/sdd write anything to /dev/sda, /dev/sdb and /dev/sdc in the process of adding /dev/sdd to the RAID5?
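For reference, the replacement sequence I was planning to run from the Live CD, assuming the array assembles as /dev/md0 and the members keep the sda-sdd names listed above:
Code:
mdadm --assemble --scan
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
# power down, swap the physical drive, boot the Live CD again, then:
mdadm /dev/md0 --add /dev/sdd
cat /proc/mdstat        # watch the rebuild progress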
View 2 Replies
View Related
Feb 1, 2011
Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 disks. One of the disks was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
I was able to examine the disks though:
Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin whatever small chance I have left to rescue my data, I would like to hear the input of this wise community.
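My current understanding (which is exactly what I'd like checked) is that a forced assemble is the far less risky first step, since it only rewrites the superblock event counters rather than recreating metadata - a sketch, assuming the members are the four sdX2 partitions:
Code:
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2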
View 4 Replies
View Related
Jul 23, 2010
I decided I needed to expand my openSUSE partition because I was running out of room. There was some unallocated space following my /home and swap partitions that I wanted to assign to my / partition.
So after taking note that bad things could happen (as I've read everywhere), I got myself a copy of GParted (since Yast! partitioner doesn't move partitions, does it?) to start getting the thing to work. Well, actually, I got PartedMagic which has GParted (I realize it could be a problem NOW if they didn't have the latest version).
So the first step was to move the swap partition. I decided to do it one step at a time and only moved the swap partition to the end. That turned out fine...no problems whatsoever.
The next step is where the problem came in: moving the /home partition. It took a while to move, but once it got to the end it showed a message similar in nature to "error detected" without giving a long list of error messages; if it had, I probably would have copied it down. There wasn't anything in the "logs" in GParted either, besides what it showed. Looking back, I probably should have gone into /var/log to get additional log information, but I guess I wasn't smart enough to do so (provided that GParted does leave log messages there, which I think it does).
After that, it refreshed the drive and ended up showing no partitions, with an error saying the only thing it can do is create a new partition table. After that, openSUSE wouldn't boot. After loading (both normal and failsafe modes), it gives me the message in the link at the bottom. I can still access those partitions fine. Nothing's corrupted. Windows also boots up fine (it's on a different drive, though) and reads the affected drive fine. Linux-based LiveCDs (including openSUSE 11.3) read the partitions fine too.
I've tried using e2fsck on my ext4 partitions with commands I found during a search, and they seemed to "fix" those partitions, but it still won't boot and gives me the same message. Looking at it carefully, it seems the reason it can't find a superblock is that it can't find the part8 entry of whatever that device under /dev/disk/by-id/ is.
I would very much prefer a failsafe (or at least mostly failsafe) solution that won't (or is unlikely to) result in requiring the restoration of a backup, but I do understand that there is always the chance of something going wrong that will kill everything. Considering that I can still access the data on that drive, I don't believe the data is corrupted. Maybe the drive itself (as in whatever signatures it may leave), but not the filesystems.
This is what I end up with when trying to boot into openSUSE (sorry, I don't know how to get the log when it doesn't put it in the logs directory): https://docs.google.com/leaf?id=0B3A...ut=list&num=50
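From what I gather, the first thing to check from the LiveCD is whether the by-id partN names the bootloader and fstab expect still exist after the move - a sketch, where /mnt is just where I'd mount the root partition:
Code:
ls -l /dev/disk/by-id/ | grep part    # which partN links exist after the move
sudo blkid                            # filesystem UUIDs/labels per partition
cat /mnt/etc/fstab                    # compare against what is actually referenced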
View 9 Replies
View Related
Jun 16, 2010
I have an x64 openSUSE server with two hard drives installed. The first one is used for the / and /home partitions and the other is for backups. Ironically enough, it is the backup hard drive I am having trouble with. I was having trouble writing to the drive and unmounted it to perform an fsck; however, now whenever I try to mount it I get the following error:
mount: wrong fs type, bad option, bad superblock on /dev/sda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
Does anyone know how I can repair the drive and retrieve the data?
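If it's an ext3/ext4 filesystem, something like this is what I'd try next - mke2fs with -n only prints where the backup superblocks live, it does not write anything:
Code:
sudo mke2fs -n /dev/sda1          # list the backup superblock locations (read-only because of -n)
sudo e2fsck -b 32768 /dev/sda1    # retry the check using one of the backups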
View 1 Replies
View Related
May 14, 2011
After several crashes during videos it seemed like a good idea to fsck root. Downloaded the latest systemrescuecd and ran it at boot. The error message was 'bad magic number, corrupt superblock' with a suggested command to try another superblock. That failed with the same message. Tried tune2fs to force fsck at boot and got the same message. The drive is less than 6 months old and the installed system is working more or less ok. The command I used was 'fsck.ext4 /dev/sdc2'. What am I doing wrong?
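One thought (possibly wrong): since the installed system still runs fine, maybe /dev/sdc2 isn't actually an ext4 partition at all (swap, LVM, or simply the wrong disk), which would explain the bad magic number. Checking before another fsck:
Code:
blkid /dev/sdc2      # what filesystem signature is actually on that partition?
fdisk -l /dev/sdc    # partition layout, to confirm which one really holds root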
View 9 Replies
View Related
Mar 24, 2011
This might take a little while to explain, so please bear with me. I just bought an HP N36L microserver to replace my Netgear ReadyNAS, as it has 4 bays as opposed to the 2 in the Netgear. The HP came with a 250GB Seagate drive, so I installed Ubuntu 10.04 Server onto that as a minimal install in a 5GB partition, selected basic server, LAMP and Samba during the setup, and then once installed added the minimal GUI. All good so far.
The idea was to have 3 x 1TB drives in the HP running software RAID 5, and after trying to do this via the Disk Utility GUI and ending up with misaligned cylinders, I found this guide for setting up the RAID 5 array - http://www.jamierf.co.uk/2009/11/04/...n-ubuntu-9-10/ Now, the problem with setting up the RAID was that I had just over 900GB of data on the 2 disks in the Netgear and not enough space anywhere to back it all up. So the plan was to install 1 x 1TB drive, format it to ext4 and transfer all the data onto it from the Netgear, then pull the 2 drives from the Netgear and build the RAID 5 array using just those 2, then copy the data from the 3rd drive onto the RAID, and finally add the 3rd drive and grow the array.
This was actually all going fine: the 2-disk array was built, the data transferred, and all was good, so I added the 3rd drive to the array and issued the grow command. After many hours this completed and I had a 2.0TB filesystem icon on the desktop.
Now the problem: the last two steps were to unmount the array, run an fsck on it (as per the guide), and then grow the filesystem, but the array wouldn't unmount - it kept saying it was busy. I tried to force it but still nothing, so I decided to reboot (my Windows background coming back to haunt me, perhaps). When the box came back up, it failed to mount the array due to a bad superblock.
[Code]...
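For reference, the remaining steps I was attempting were, roughly, an unmount, a forced fsck and then growing the filesystem (assuming the array is /dev/md0):
Code:
sudo umount /dev/md0
sudo fsck.ext4 -f /dev/md0
sudo resize2fs /dev/md0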
View 9 Replies
View Related
Sep 27, 2010
I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. It has previously been backed up to a hardware RAID-0 array directly attached to our Windows server. The capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750GB) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue, as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).
The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:
160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB
Am I missing something or making a false assumption somewhere?
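A guess at the 1.8TB figure: Disk Utility may have built a striped set, which is limited to the smallest member times the number of members (11 x 160GB is roughly 1.76TB, and 3 x 160GB matches the 480GB I saw earlier). For a plain concatenation of unequal disks, LVM looks like the more direct route - a sketch, with example device names:
Code:
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd          # one physical volume per backup disk
sudo vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd
sudo lvcreate -l 100%FREE -n backuplv backupvg    # one big logical volume spanning them all
sudo mkfs.ext4 /dev/backupvg/backuplv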
View 4 Replies
View Related
Aug 1, 2010
I had done a new Lucid install to a 1TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that Lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80GB disk that I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, and "cat: /proc/cmdline: No such file or directory"- so I created directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
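In case it helps anyone spot my mistake, this is the chroot procedure I now think I should have used, with /proc, /sys and /dev bind-mounted instead of created as empty directories (assuming the array's root filesystem is /dev/md0):
Code:
sudo mount /dev/md0 /mnt
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo mount --bind /dev  /mnt/dev
sudo chroot /mnt
blkid                   # note the array's UUID, put it in /etc/fstab and the grub config
update-initramfs -u
update-grub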
View 2 Replies
View Related
Oct 16, 2009
My Fedora 11 system is not starting any longer. It stops with the message:
Code:
VFS: Can't find ext4 filesystem on dev dm-0
The system has been telling me for a while that a lot of the sectors on one disk of the (software) RAID set have already failed. So I tried to disconnect each of the disks and start them separately. Unfortunately this is not working (one does not work at all; the other gets just as far as with both connected), and when I tried to recover the system with the Fedora DVD, it said no distribution was found. I am quite new and do not know much about Linux systems, so I do not know what further information you could need. Maybe it is important that both disks are encrypted (the system gets far enough that I can type in the password).
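From what I have gathered so far, the order to try from a rescue environment would be something like this - the device and volume names are only guesses, since I don't know my exact layout:
Code:
mdadm --assemble --scan                   # bring the software RAID up first
cryptsetup luksOpen /dev/md0 cryptroot    # unlock the encrypted container (if it is LUKS)
vgscan && vgchange -ay                    # activate any LVM volumes inside it
fsck /dev/mapper/cryptroot                # or the logical volume that holds the root filesystem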
View 2 Replies
View Related
Jun 28, 2011
I'm trying to fix a disk which is inaccessible.
It's using LVM2.
From a rescue disk I can scan and see the LVM device fine.
When I try to mount it fails.
When I run e2fsck it says "recovering journal" and then fails with "unable to set superblock flags"
I have tried running e2fsck -b 32768 (and other backup blocks) and it just says "recovering journal" and then fails with "unable to set superblock flags"
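Since I don't want to make it worse, my current plan (comments welcome) is to copy the volume to a spare disk first and run e2fsck against the image; the LV path below is just an example:
Code:
ddrescue /dev/vg0/data /mnt/spare/data.img /mnt/spare/data.map   # image the LV onto a spare disk
e2fsck -f /mnt/spare/data.img                                    # e2fsck can be run against an image file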
View 4 Replies
View Related
Jul 17, 2010
Can someone point me in the direction of a step-by-step guide to setting up a RAID 5 using Disk Utility and 3 spare drives? I have the main OS files on an 80GB drive and I would like to mount the 3 drives as RAID 5. Just shooting in the dark now. Screenshot is attached. [URL]...
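In case Disk Utility turns out to be a dead end, this is roughly the command-line equivalent I've seen described (assuming the three spare drives are sdb, sdc and sdd, each with a single partition):
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid && sudo mount /dev/md0 /mnt/raid
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # so the array assembles at boot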
View 1 Replies
View Related
Jul 20, 2011
Has anybody ever used Disk Utility to set up software RAID? Here I am running terminal commands (I'm a terminal junkie) and I just happen to stumble across instructions that indicate "Or you can just set it up through Disk Utility."
Sure enough in disk utility, it looks like all of the configurable options are there. It makes me wonder, though... is this kind of GUI functionality something that isn't really solid? Or does it operate predictably and effectively?
View 4 Replies
View Related
Mar 12, 2010
I had Ubuntu 9.04 and Vista installed on an 80GB drive, and I had a spare 80GB drive as well. I set up a RAID0 config in my BIOS, then installed Ubuntu 9.10 onto it. All was fine until the very end, when it said GRUB failed to install.
So I rebooted, and I'm left with a blinking cursor. How do I install GRUB? I've installed Ubuntu a few times now and never had an issue, so now I'm lost.
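From what I've read (and I'm not at all sure about this), the live CD needs dmraid before it can even see a BIOS RAID0 set, and GRUB then has to be pointed at the mapped device - something like the following, where the mapper name is only an example:
Code:
sudo apt-get install dmraid
sudo dmraid -ay                       # activate the BIOS RAID set
ls /dev/mapper/                       # e.g. nvidia_abcdefgh1 - the name depends on the controller
sudo mount /dev/mapper/nvidia_abcdefgh1 /mnt
sudo grub-install --root-directory=/mnt /dev/mapper/nvidia_abcdefgh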
View 1 Replies
View Related
Feb 2, 2010
Recently, the SMART utility said that one of the drives had failed and another drive was about to fail. I downed the box and hooked them up to my Windows machine to run SeaTools on them (they are all Seagate drives). SeaTools said that the drives were fine, while Ubuntu said they were failing/dead. Yesterday I decided to try to fix one of the drives in the RAID. I turned the server off, took the failed drive out, and restarted. Of course the RAID didn't work because only 2 of the 3 drives were there; however, it had been working with only 2 of the 3 drives for a couple of months now (I'm a lazy college student). I turned it back off and back on with the drive there, just to see if I could get the RAID up again, but I haven't been able to get it to go. So far I've tried:
Code:
mdadm --assemble /dev/md0 /dev/sd[b,c,d]
mdadm: no recogniseable superblock on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
[code]....
I'm looking for a way to trick the RAID into working with just 2 drives until I can warranty the Seagate and buy an external 1.5TB drive to use as another backup. Also, how do I remove the bad drive from the array and replace it with a fresh drive without data loss?
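What I was planning to try next, unless someone says it's a bad idea: check which devices still carry a superblock (the names may have shifted after pulling the drive) and then force the array up with just the two good members:
Code:
mdadm --examine /dev/sd[bcd] 2>&1 | grep -E '/dev|Events|State'
mdadm --assemble --force --run /dev/md0 /dev/sdc /dev/sdd   # only the two members that still show superblocks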
View 3 Replies
View Related
Feb 25, 2010
I am currently trying to set up software RAID 1 on a running Ubuntu 9.10 system with mdadm. I might have done something wrong and I'm trying to start over from the beginning. Does anyone know how to remove all the RAID info from a hard disk and get it back to its original state?
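For what it's worth, the cleanup I believe is needed (please correct me if this misses anything): stop the array, wipe the md superblock from each member partition, and remove the leftover config. zero-superblock only touches the RAID metadata, but I'd still back up first:
Code:
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda1   # repeat for every partition that was part of the array
sudo nano /etc/mdadm/mdadm.conf          # delete the matching ARRAY line, then:
sudo update-initramfs -u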
View 2 Replies
View Related
Mar 10, 2010
I want to install Ubuntu x86_64 or x86 on my computer.
I have used the Desktop and Server editions on other machines and installed them successfully, but I could not install Ubuntu on this computer.
My hardware is:
AMD Phenom II X4
Gigabyte GA-MA790GP-DS4h [SB750 - AMD AHCI Compatible RAID Controller]
2 x 250GB Seagate ST3250410AS @ Raid0
I installed Windows successfully and created a 50GB partition for Ubuntu.
I tried to install Ubuntu, but the disks are not detected in the partition-managing screen.
How can I install Ubuntu?
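From what I could find, the desktop installer only sees a BIOS RAID0 set once dmraid has been activated in the live session (I believe the alternate install CD handles this on its own, but I haven't confirmed that) - roughly:
Code:
sudo apt-get install dmraid   # in the live desktop session
sudo dmraid -ay               # activate the BIOS RAID set
ls /dev/mapper/               # the RAID0 volume should appear here; then re-run the installer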
View 5 Replies
View Related
Mar 24, 2010
My system is installed on my main hard drive. If I buy a new drive, is it possible to set up software RAID with the old and new drives without reinstalling Ubuntu?
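My understanding of the usual approach (please confirm before I try it): create a degraded RAID1 on the new drive with the second member listed as "missing", copy the running system across, boot from the array, and only then add the old drive so it syncs - roughly:
Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # new drive only, for now
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt && sudo rsync -aHAXx / /mnt/                       # copy the live system over
# update /mnt/etc/fstab and grub to boot from the array, reboot onto it, then:
sudo mdadm /dev/md0 --add /dev/sda1                                         # old drive joins and resyncs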
View 2 Replies
View Related
Jun 18, 2010
I have a fileserver which is running Ubuntu Server 6.10. I had a RAID5 array consisting of the following disks:
Code:
/dev/sda1
/dev/sdb1
/dev/sdd1
/dev/md0 - the RAID device for the above three disks
The sda1 disk has failed and the array is running on 2 of 3 disks. The remaining disks in the system are:
/dev/sdc (OS disk)
/dev/sde (new 2tb disk - unused)
/dev/sdf (new 2tb disk - unused)
My plan is to rebuild the array using the two new disks as RAID1. Would the best way to do this be to create a new RAID1 device on /dev/md1 and then copy all the data over from /dev/md0? Also - this may sound stupid, but since all 3 drives in md0 are identical, I'm not sure which physical disk is the bad one. I tried disconnecting each disk one by one and then rebooting, but the system doesn't appear to want to boot without the bad drive connected. I've already failed the disk in the array with mdadm, but I'm unsure of how to remove it properly.
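Two sketches that might settle the last part (device names are guesses): identifying the bad drive physically by its serial number and removing the failed member, plus the create command for the new RAID1 on the 2TB disks, assuming I give each one a single partition first:
Code:
ls -l /dev/disk/by-id/                  # the symlink names include model and serial number per drive
sudo smartctl -i /dev/sda               # also prints the serial, if smartmontools is installed
sudo mdadm /dev/md0 --remove /dev/sda1  # remove the already-failed member from the array
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1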
View 3 Replies
View Related