Ubuntu Servers :: Raid Array Incorrectly Assembled On Boot

Feb 20, 2011

I've got a couple of new hard disks that I have partitioned (3 partitions per disk) and set up in a mirrored software raid array using mdadm. They've synced, I've put file systems on them (1 x ext4, 2 x luks + ext4) and I can mount them. I've checked the partitions using fdisk. I've checked the filesystems using fsck. So far so good. Next step is that I'd like mdadm to automatically assemble them on boot. (Not bothered about mounting and crypttabing yet.)

I've used sudo /usr/share/mdadm/mkconf to generate a new mdadm.conf with the appropriate UUIDs for the new partitions. I've checked that this matches the output of sudo mdadm --detail --scan.

The new lines in this file are:

ARRAY /dev/md9 level=raid1 num-devices=2 UUID=470fb8a6:45561fe0:ebda4a02:9ba7a1ed
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=f351fbba:c704a4b2:ebda4a02:9ba7a1ed
ARRAY /dev/md8 level=raid1 num-devices=2 UUID=c6ccec17:2274588e:ebda4a02:9ba7a1ed

To check that the mdadm.conf is fine I have stopped the new arrays:

[Code].....
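
For reference, a minimal sketch of the kind of stop/re-assemble test meant here, assuming the three md devices listed above:

Code:
# stop the new arrays, then re-assemble them purely from mdadm.conf
sudo mdadm --stop /dev/md8 /dev/md9 /dev/md10
sudo mdadm --assemble --scan
cat /proc/mdstat    # all three arrays should come back active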




Server :: Raid 10 Not Assembling Mdadm Assembled From 2 Drives - Not Enough To Start The Array?

Feb 20, 2011

This is the message I get when I try to start it:

mdadm: /dev/md0 assembled from 2 drives - not enough to start the array

Below is the information I've collected. Any help on how I can get the RAID back up and going so I can get the data off it would be awesome.

sudo mdadm --examine --scan -v
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=91c36708:a7cbb532:5b51dc92:ba008491
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1,/dev/sda1

[code]...
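
A commonly suggested recovery path (a sketch only - check the member event counts before forcing anything) is to stop the partial array and force assembly from all four members named above:

Code:
sudo mdadm --stop /dev/md0
# see how far apart the members have drifted
sudo mdadm --examine /dev/sd[a-d]1 | grep -E 'Event|State'
# --force lets mdadm accept members with slightly stale event counts
sudo mdadm --assemble --force /dev/md0 /dev/sd[a-d]1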


Ubuntu Servers :: Creation Of RAID-0 Array In Disk Utility Resulting In Smaller Than Expected Array?

Sep 27, 2010

I have a NETGEAR ReadyNAS NV+ with four 1TB drives in a RAID-5 array. This is our primary file storage. It has previously been backed up to a hardware RAID-0 array directly attached to our Windows server, but the capacity of this backup array is no longer sufficient. So the plan was: take a bunch of 200GB to 320GB drives (and a 750) I had kicking around, chuck them in a couple of old SCSI drive enclosures I have collecting dust, attach them via IDE/SATA-to-USB adaptors to a USB hub, attach that to the server, create a JBOD array spanning the disks, and back up the NAS to that. Performance is not an issue as this is just to be used for backup, with the idea being as near to zero cost as possible (spend so far = NZ$100-ish).

The first hurdle I struck was Windows not supporting Dynamic Disks on USB drives (required to create a spanned volume). At first I resisted using another machine (i.e. a machine running Ubuntu) as I didn't want to dedicate a piece of hardware to backing up the NAS. I then decided it would be acceptable to do this via a VM, which is what I've done. So I have 10.04 running under VMware Server 2.0.2 under Windows Server 2008 R2. The disks are all presented to the VM. I wasn't sure if I was going to end up creating the array under LVM or something else, but I noticed Disk Utility has an option to create an array, so I tried that. When I add two 250GB drives, the array size is 500GB. When I then add a 160GB drive, the array size drops to 480GB. Huh? If I keep adding disks (regardless of order) the final array size comes out at 1.8 TB, as per the attached screenshot. Now with the following drives, I expected something more like:

160 + 250 + 250 + 750 + 250 + 200 + 200 + 250 + 320 + 250 + 320 = 3.2TB

Am I missing something or making a false assumption somewhere?
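
For what it's worth, the reported sizes match RAID-0 striping rather than JBOD/spanning: a striped array can only use min(member size) x member count, so each smaller disk added shrinks every member's usable contribution:

Code:
2 x 250GB                  -> 2 * 250GB = 500GB
add a 160GB disk           -> 3 * 160GB = 480GB
11 disks, smallest 160GB   -> 11 * 160GB = 1760GB ~ 1.8TB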


Hardware :: Determine The UUID Of 2 Disks That Are Assembled In A RAID1 Array?

Feb 17, 2011

I just experienced a HDD failure, and while reorganizing the drives in this machine I realized the benefits of UUIDs instead of the /dev/sdX nomenclature. I am trying to determine the UUID of 2 disks that are assembled in a RAID1 array. Right now they are /dev/sde and /dev/sdf, each with only one partition. I tried ls -l /dev/disk/by-uuid but I get only the UUIDs of other disks, not the ones currently ID'd as sde and sdf. My mdadm.conf assembles several RAID arrays, all by UUID, but somehow I can't recall how I got the UUIDs of the other HDDs in the first place...
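
A hedged sketch of reading the array UUID back from the member superblocks (assuming the members are /dev/sde1 and /dev/sdf1):

Code:
sudo mdadm --examine /dev/sde1 | grep UUID
sudo mdadm --examine --scan    # prints ready-made ARRAY lines
# note: /dev/disk/by-uuid lists *filesystem* UUIDs, which is likely why
# the raw members don't appear there; mdadm superblock UUIDs are separate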


Ubuntu :: Growing Raid Crashed Now Can't Get Assembled

Aug 3, 2010

I was growing a 3 disk RAID 5 to a 4 disk RAID 6. I used mdadm --grow /dev/md0 --level=6 --raid-devices=4 --backup-file=/home/user/Desktop/md0.backup for the grow. The drive I added was /dev/sde1. Well, it was going strong for like 2-3 days, and the last time I checked it was at 73% rebuild. Then the frikin power went out on me, and when I came back home I can't get the array assembled.

I just have a fear that I lost my superblocks, but I'm also a moron when it comes to this stuff... I would like to think not, but I'm sure there are much smarter people than I on this issue. Any help would be appreciated because of course I have a lot of stuff on my RAID with 3 2TB drives... the irony is I was converting to RAID 6 to make sure I had a really good safety net.
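
One hedged avenue before assuming the superblocks are gone: mdadm can resume an interrupted reshape from the backup file given to --grow (member names other than sde1 are assumed here):

Code:
sudo mdadm --assemble /dev/md0 \
    --backup-file=/home/user/Desktop/md0.backup /dev/sd[b-e]1
cat /proc/mdstat    # if it assembles, the reshape should continue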


Ubuntu Servers :: Partitioning >2TB RAID Array?

Mar 26, 2011

I have an Areca hardware RAID array that I'm trying to format & partition on a fresh Ubuntu 10.04 LTS installation. The OS drive is not on the RAID card, it's entirely separate. The RAID is a 6TB volume so I realize I have to use parted to format it, not fdisk (which I've always relied on).

My problem is that I can't figure out how to get parted to like my settings. It seems like everything I try gives me the warning "Warning: The resulting partition is not properly aligned for best performance." Here's what I'm doing:

Code:
(parted) p
Model: Areca ARC-1280-VOL#00 (scsi)
[code].....

What start/end settings should I use to get a properly aligned partition? How do I know? I have tried a mix and match of 0, 0s, 1, 1s, -0, -0s, -1, -1s, 100% for my start/end with no success.
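
A common recipe (a sketch, not Areca-specific) is to start the first partition at 1MiB, which satisfies parted's optimal-alignment check for virtually any stripe size:

Code:
sudo parted -a optimal /dev/sdX    # substitute the Areca volume device
(parted) mklabel gpt
(parted) mkpart primary 1MiB 100%
(parted) align-check optimal 1
(parted) print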


Ubuntu Servers :: Lost Partiontable On RAID 1 Array?

Jun 4, 2010

I just restarted my server (Ubuntu 9.04 server, running on ESXi 4.0), and while copying files onto the server using Samba I got strange problems and the connection was lost. When I rebooted the whole stack - ESXi as well as Ubuntu Server - I did find problems on my RAID disk.

In the directory where the new files were added there are a lot of files, but many of them do not show any info except their name:

1304 -rw-rw-rw- 1 spoorhobby spoorhobby 1327274 2010-05-15 22:10 DSCF1895.JPG
? -????????? ? ? ? ? ? DSCF1896.JPG
? -????????? ? ? ? ? ? DSCF1897.JPG
? -????????? ? ? ? ? ? DSCF1898.JPG

[Code].....

Both mirror disks are still functioning, and I can still add/delete files from the server, from other Linux systems, and from other Windows systems via Samba.

I did make a full backup on a different server.
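
The "?" entries usually point at directory/inode damage, so a hedged first step is an offline check of the md device (device and mount names assumed):

Code:
sudo umount /path/to/mountpoint
sudo fsck.ext3 -f /dev/md0    # use the fsck matching the filesystem
# re-run until clean, then remount and re-check the directory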


Ubuntu Servers :: Starting Degraded Raid 5 Array?

Jun 11, 2010

So my server has 7 HDs in RAID 5; all was working well until one of them died. The HD that died sort of works: it can read about half a file, but it also freezes on the benchmark test in Disk Utility. Unfortunately, when I take it out, on boot it says: "The drive for /media_kbt is not ready or present; press S to skip or M for manual recovery." I hit S and then go to Disk Utility. But I can't start or add disks to the array.

Here is me trying to do random stuff

Code:
administrator@3dslice-host:~$ sudo mdadm --stop /dev/md0
[sudo] password for administrator:
mdadm: metadata format 00.90 unknown, ignored.
mdadm: stopped /dev/md0
administrator@3dslice-host:~$ sudo mdadm --add /dev/md0 /dev/sda1
mdadm: metadata format 00.90 unknown, ignored.

[Code]...
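
Two hedged observations: the "metadata format 00.90 unknown" warning typically comes from a stale mdadm.conf entry (metadata=00.90 should read metadata=0.90), and --add only works on a running array, which may be why it failed right after --stop. A sketch with assumed member names:

Code:
# first fix /etc/mdadm/mdadm.conf: metadata=00.90 -> metadata=0.90
sudo mdadm --assemble --run /dev/md0 /dev/sd[b-g]1   # start degraded on 6 of 7
sudo mdadm --add /dev/md0 /dev/sda1                  # then add the replacement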


Ubuntu Servers :: WRITE Performance Down On RAID 1 Array

Sep 7, 2010

I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!) Using dd to measure, write performance is only 8.7 MB/s. Read is great though, at 74.5 MB/s. The tests were run straight after rebooting, and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.

[code]...

As you can see from the bo column there is definitely something stalling. As per top output, the %wa (waiting for I/O) is always around 75%, so as per above, writes are stalling. CPU is basically idle all the time. Hard drives are quite new and smartctl (smartmontools) does not detect any faults.
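
For reference, a typical dd measurement of the kind described (paths assumed), flushing caches so the numbers are honest:

Code:
# write test: fdatasync forces data to disk before dd reports a rate
dd if=/dev/zero of=/mnt/raid1/testfile bs=1M count=1024 conv=fdatasync
# read test, after dropping the page cache
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
dd if=/mnt/raid1/testfile of=/dev/null bs=1M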


Ubuntu Servers :: RAID Array Not Mounting Correctly

Jun 6, 2011

I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:

The disk drive for /var/media is not ready yet. Press S to skip... Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the "mdadm assemble" is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.

As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know:

1) why this broke in an upgrade and is it a known problem?
2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
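
For (2), a hedged sketch: make sure the array is recorded in /etc/mdadm/mdadm.conf and rebuild the initramfs so assembly happens before local mounts; a nobootwait option in fstab keeps a miss from hanging the boot:

Code:
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u
# /etc/fstab - nobootwait stops a missing array from stalling bootup:
# /dev/md0  /var/media  ext4  defaults,nobootwait  0  2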


Ubuntu Servers :: Monitor A Buffalo Raid Array?

Jun 6, 2011

I have a 10.04 server with a LinkStation RAID 5 attached via USB. What is the best way to monitor the drives for a failure? It's at a remote site.


Ubuntu Servers :: RAID 5 Software Array With 3TB Drives

Jun 15, 2011

I am trying to use 3 3TB Western Digital drives in a RAID 5 software array. The trouble seems to be that the array is created with only 1.5 TB of capacity, rather than the expected 6 TB.

Here are the commands and output:
$ sudo dmraid -f isw -C BackupFull6 --type 5 --disk /dev/sde,/dev/sdf,/dev/sdg --size=5589G
Create a RAID set with ISW metadata format
RAID name: BackupFull6
RAID type: RAID5
RAID size: 5589G (11720982528 blocks)
RAID strip: 64k (128 blocks)
DISKS: /dev/sde, /dev/sdf, /dev/sdg
About to create a RAID set with the above settings. Continue ? [y/n] :y

$ sudo dmraid -s
*** Group superset isw_cdjhcaegij
--> Subset
name: isw_cdjhcaegij_BackupFull6
size : 3131048448
stride : 128
type : raid5_la
status : ok
subsets: 0
devs : 3
spares : 0

So I cannot understand why the size of the created array is only 3131048448 or about 1.5 TB. The first command seemed to imply it was going to create an array with 5589GB.

System is:
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
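
One hedged possibility: dmraid's ISW (Intel fakeRAID) metadata support has known size-handling problems with disks over 2TB, which would explain the truncation. Plain mdadm has no such limit with 1.x metadata; a sketch of the equivalent software array (this destroys any existing metadata on the disks):

Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sde /dev/sdf /dev/sdg
cat /proc/mdstat    # expect roughly 6TB usable for 3 x 3TB in RAID 5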


Ubuntu Servers :: RAID-6 Cannot Start Degraded Array

Jun 26, 2011

Ubuntu Server 11.04 i386. I've used Linux on and off for years but only in small doses, so I'm really just at newbie level. I was running an Openfiler NAS, but decided to give Ubuntu+Webmin a try, and up 'til now I've been happy with progress. I have set up a RAID-6 array using 5 x 1TB SATA drives. I've ensured that the array is in a "clean" state, and now I want to do some failure testing. The problem occurs when I remove one of the drives in the array. I shut down, remove a drive, then boot up. The array won't start at all, and comes up with this error during boot:

Quote:

the disk drive for /mnt/raidvol1 is not ready yet or not present
Continue to wait; or Press S to skip mounting or M for manual recovery

If I wait, nothing happens. Obviously the RAID array should start in degraded mode, but it fails to mount at all. When I press "M" to go into manual recovery and type "mount -a" I get the response:

Quote:

mount: special device /dev/RAIDVG1/RAIDLV1 does not exist

I have set BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm without success. If I reconnect the disconnected drive, the array works fine, and is in a clean state.
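
The missing /dev/RAIDVG1/RAIDLV1 suggests LVM never activated because the md device wasn't started. A hedged sequence from the recovery shell, plus the step BOOT_DEGRADED needs in order to take effect:

Code:
sudo mdadm --assemble --scan --run   # --run starts the array even degraded
sudo vgchange -ay                    # activate the volume group on top of it
sudo mount -a
# BOOT_DEGRADED=true is only read when the initramfs is rebuilt:
sudo update-initramfs -u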


Ubuntu Servers :: Make A Service To Depend On A RAID Array?

Mar 7, 2011

Short story: I have a problem with one of my services (mediatomb) - it requires an md RAID array to be mounted in order to start, because it uses files from it. $remote_fs is added by default to the "Required-Start" line of the init script, so I thought that this should be enough. However, the mediatomb service fails to start on boot, but starts just fine when I execute "service mediatomb start" later. The array is entered in /etc/fstab and is automatically mounted on boot.

Long story...

This is my file server (Ubuntu Server 10.10), which has a RAID array created with mdadm (mounted on /z); the root filesystem is located on a USB thumb drive. I've installed mediatomb, but I wanted to put its database files on the RAID array instead of the root fs, so I've symlinked /var/lib/mediatomb (the default path) to /z/mediatomb on the array. This is because the mediatomb DB is supposed to be updated fairly often, so I didn't want it to stay on the flash drive.

Problem is, the mediatomb service can't start on boot - in /var/log/mediatomb.log, it says "2011-03-07 19:22:47 ERROR: /var/lib/mediatomb : 20 x No such file or directory". As I said, it works fine when manually started later...

This is the fstab entry for the raid array code...
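
One low-tech workaround, as a sketch (assuming the init script is /etc/init.d/mediatomb and the symlink target is /z/mediatomb): have the script wait briefly for the array mount before starting the daemon:

Code:
# near the top of do_start() in /etc/init.d/mediatomb
for i in $(seq 1 30); do
    [ -d /z/mediatomb ] && break    # mount is up, go ahead
    sleep 1                         # otherwise wait up to 30s
done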


Ubuntu Servers :: Mdadm RAID 6 Array With Si 3132 SATA Controller ?

Mar 12, 2010

I've recently started having an issue with an mdadm RAID 6 array that been operational for about 2500 hours.

Intermittently during write operations the array stalls, dropping to almost 0 write speed for 10-30 seconds. When this occurs, one or both of the 2 drives attached to a 2-port Silicon Image si3132 SATA-II controller "locks up" with its activity light stuck on. This just started occurring within the last week and didn't seem to coincide with any update that I noticed. The array has just recently passed 12.5% full. The size of the write does not seem to make any difference, and it seems completely random. Sometimes copying a 5 GB dataset results in no slowdown; other times a torrent downloading to the array at 50kb/sec does cause a slowdown, and vice versa.

The array consists of 8 WD 1.5TB drives, 6 attached to the ICH9R south bridge, and 2 attached to a si3132 based PCI express card. The array is formatted as a single ext4 partition.

Checking SMART data for all drives shows no errors. Testing read speed with hdparm reports what I would expect (100MB/sec for each drive, ~425MB/sec for the array).

The only thing I did notice is that udma6 is enabled for all the ICH9R drives while only udma5 is enabled for the si3132 drives. Write cache is enabled for all the disks. Attempting to set the si3132 drives to udma6 results in an I/O error from hdparm.

The si3132 drives are using the sata_sil24 driver. Nothing of interest appears in the kernel log or syslog. During this time top shows very high wait time.

The si3132 controller appears to have the original firmware from 2006 loaded. There are some firmware updates available on the Silicon Image website for this controller, which now appear to offer a separate firmware for RAID operation (some sort of hybrid controller/software thing the controller supports) and a separate firmware for standard IDE use.

Has anyone had similar issues with this controller? Is a firmware update a reasonable course of action? If so which firmware is best supported by the linux driver?

I know I'm not using its RAID features, but I've dealt with controllers that needed to be in RAID mode for AHCI to be active and for Linux to work well with them. I'm a bit iffy at the idea of just trying it and finding out, as it could knock 2 disks of my array out of action.


Ubuntu Servers :: Simulate A Failed Raid Array On A Pair Of 2tb Disks?

Feb 26, 2011

Using a fresh copy of Server 10.04, I'm trying to simulate a failed RAID array on a pair of 2TB disks. Here is the procedure I have been following so far:

- Remove the dead disk partitions from each of the raid 1 arrays (substitute the correct md devices and partitions)
- mdadm /dev/md0 -r /dev/sdb2
- mdadm /dev/md1 -r /dev/sdb3

[code]....

I get an error here that sfdisk does not support GPT (GUID partition table). I thought sfdisk did support GPT? It says to use parted, but I can't find a command that copies a partition table over from another disk in the parted documentation. Any suggestions? I suppose I could make the partitions manually, but I'm writing a procedure for people who aren't that technical, and I need it to be simple enough to be run in my absence; manually building the partitions would be too hard for them.
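
sfdisk is MBR-only; for GPT the usual tool is sgdisk from the gdisk package (assuming it's available on your release). A hedged sketch of cloning the surviving disk's table onto the replacement:

Code:
sudo apt-get install gdisk
sudo sgdisk -R=/dev/sdb /dev/sda   # replicate sda's GPT onto sdb
sudo sgdisk -G /dev/sdb            # randomize GUIDs so the disks stay distinct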


Ubuntu Servers :: Mounting Existing RAID Array On Fresh Installation?

Aug 1, 2011

I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with x4 2TB disks). All is working well. My mdadm.conf file looks like this

Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.

[code]....

If I were to lose the boot disk and need to remount the RAID array on a fresh installation, what steps do I need to go through? My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?
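
That assumption is essentially right; a hedged sketch of the fresh-install steps:

Code:
sudo apt-get install mdadm
sudo mdadm --assemble --scan    # members are found via their superblocks
cat /proc/mdstat                # confirm /dev/md0 is up
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u        # so it assembles on boot as well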


Ubuntu Installation :: RAID Array Will Not Boot (x64)?

Sep 15, 2010

I am using the 10.04.1 x64 Kubuntu live CD to install Kubuntu on my FakeRAID 0 array. I tell it not to install GRUB, as I know it is still currently broken, and the install goes flawlessly. However, on first boot using my live GRUB CD, unless I tell my computer to point to the CD it will hang (it is told to boot from CD first, so I'm not sure why it does). When I tell it to boot to Linux, it will not boot, saying the kernel is missing files (too much to list sadly, and all of it I do not understand), then offers me a terminal to input "help" into for a list of Linux commands. Windows 7 Pro x64 works just fine. The CD was downloaded via P2P, if it matters.


Ubuntu Servers :: Mdadm - Why /dev/sdb1 And /dev/sdi1 Show As Both Ext2fs And Also As Part Of A RAID Array

May 31, 2011

I've been having some problems with my RAID 5 array, and after extensive investigation I'm fairly sure that my last resort is rebuilding the array. I'd tried --assemble, because it's a previously created array, but it didn't seem to like that. So I checked into --create, and it will re-create the array without destroying the data if the superblocks are persistent, which they seem to be. However, here's what I get:

[Code]....

My question is: why do /dev/sdb1 and /dev/sdi1 show as both ext2fs and also as part of a RAID array?
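
A hedged explanation: 0.90 and 1.0 md superblocks sit at the end of a partition, so an old filesystem signature at the start can survive and both detections fire. To compare what each tool sees:

Code:
sudo mdadm --examine /dev/sdb1   # md superblock (end of the partition)
sudo file -s /dev/sdb1           # filesystem signature (start of the partition)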


Debian Installation :: No Boot From RAID Array?

Dec 22, 2010

I installed Debian 5.0.3 (backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. However, the installation went quite smoothly. I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well; however, it doesn't boot! No GRUB, nothing.
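
A hedged first check from a rescue shell: GRUB has to be installed to the MBR of each RAID 1 member, since the BIOS boots a raw disk, not the array (device names assumed):

Code:
# from a chroot into the installed system
grub-install /dev/sda
grub-install /dev/sdb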


Software :: RAID Array Broken On Boot?

Jan 18, 2010

I have an issue with a RAID array failing on boot. It seems like an issue with the file system. I get past the RAID BIOS (and from what I can see, it looks alright there; all devices appear), but then the following error messages appear:

Code:

raid5: failed to run raid set md0
mdadm: failed to RUN_ARRAY /dev/md0 input/output error
mdadm: Not enough devices to start the array

and further down:

Code:

fsck.ext3: Invalid argument when trying to open /dev/md0
/dev/md0:
The superblock could not be read or does not describe a correct ext2

[code]....

I then log in with the root password to get a "Repair file system" prompt. I tried fsck, but it's not working. It's 4x1TB in RAID5 on a HighPoint RocketRAID 2300 4P SATA II/300, with Fedora 9. Not sure what other system info might be needed.
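
A hedged diagnostic pass before anything destructive: compare event counts across the members, then try a forced assembly (member names assumed):

Code:
for d in /dev/sd[a-d]1; do
    sudo mdadm --examine "$d" | grep -E 'Events|State'
done
sudo mdadm --assemble --force /dev/md0 /dev/sd[a-d]1
# fsck can only succeed once the array itself is running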


Slackware :: RAID Array Not Detected On Boot?

Aug 16, 2010

I have a RAID array, level 5, with metadata 1.2, made with mdadm. I put it in /etc/fstab to mount it on boot, but it doesn't work because the RAID is not detected on boot. I have an /etc/mdadm.conf like this:

Code:

ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=0 UUID=afdfe00e:0d18a5eb:29aa54f9:8b422ee0

Just another thing... After the command

Code:

mdadm --detail --scan >> /etc/mdadm.conf

The mdadm.conf is like this:

Code:

ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.02 name=0 UUID=afdfe00e:0d18a5eb:29aa54f9:8b422ee0

But I changed the metadata version manually because 1.02 gives me an error. I don't know if it is a bug or what! Besides this, I have to put a line in /etc/rc.d/rc.local to assemble the array.

Code:

mdadm --assemble --scan --uuid=afdfe00e:0d18a5eb:29aa54f9:8b422ee0

And after that I can mount it. Why is the array not detected on boot? Is it because the metadata type is later than 1.00? Can I put the line I have in /etc/rc.d/rc.local to assemble the array in another file that will be executed before /etc/fstab is processed?
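
For reference (hedged): the kernel's built-in RAID autodetection only handles 0.90 superblocks on partitions of type fd, so a 1.2-metadata array must be assembled from userspace before fstab is processed - e.g. early in rc.S rather than in rc.local, which runs last:

Code:
# early in /etc/rc.d/rc.S, before local filesystems are mounted (sketch)
/sbin/mdadm --assemble --scan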


Ubuntu :: Unable To Boot Due To Changed UUID In RAID Array

May 9, 2010

I'm running 64-bit Lucid. I've recently had a severe problem with my softraid (5) array, and have had to recreate the array to fix it. However, this now means that something is up with GRUB/initramfs, and booting times out while waiting for the root device (md0) to be ready. /boot is on a normal partition, not the RAID array itself. A friend of mine has rebuilt my initramfs file with the new UUID, but now I get the message: "Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(9,0)". So my question is either how do I sort this error, OR how do I rebuild the initramfs/GRUB in a way that will boot?
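
A hedged recipe from a live CD, assuming the layout described (root on md0, /boot on a plain partition):

Code:
sudo mdadm --assemble --scan
sudo mount /dev/md0 /mnt
sudo mount /dev/sdXn /mnt/boot     # the separate /boot partition (name assumed)
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
# inside the chroot: put the new UUID from `mdadm --detail --scan`
# into /etc/mdadm/mdadm.conf, then rebuild both:
update-initramfs -u
update-grub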


Ubuntu Servers :: Mounting Large (12TB) SCSI Attached RAID Array (formatted With Ntfs) ?

Feb 16, 2010

I have a large RAID array of 12 TB attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu. I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":

sudo fdisk -l
Disk /dev/sda: 164.7 GB, 164696555520 bytes
255 heads, 63 sectors/track, 20023 cylinders
[code]........
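
A 12TB volume will carry a GPT partition table, and classic fdisk only shows a protective entry for GPT disks, so its output is misleading here. A hedged sketch using GPT-aware tools plus ntfs-3g (device names assumed):

Code:
sudo apt-get install ntfs-3g
sudo parted /dev/sdb print         # parted understands GPT; fdisk doesn't
sudo mkdir -p /mnt/bigraid
sudo mount -t ntfs-3g /dev/sdb1 /mnt/bigraid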


Software :: RAID 5 Array Not Assembling All 3 Devices On Boot Using MDADM - One Is Degraded

Aug 31, 2010

I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced in, to make a RAID5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can add the missing drive without any problems later; it's just that this takes hours to sync. Here is some information:

[Code]....
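
One hedged thing to check when whole disks rather than partitions are members: the DEVICE and ARRAY lines in mdadm.conf must cover them, or boot-time assembly will skip the stragglers:

Code:
# /etc/mdadm/mdadm.conf (sketch; substitute your array's UUID)
DEVICE /dev/sda /dev/sdc /dev/sdd
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=<your-array-uuid>
# then rebuild the initramfs: sudo update-initramfs -u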


Ubuntu Installation :: Dual-boot On A RAID-0 Array - Drops To GRUB Command Line?

May 28, 2011

I've recently had trouble reinstalling my Ubuntu system, as I was getting various unusual errors as described in my old thread here. I thought it was probably something to do with my RAID-0 array (which was pre-installed on my laptop from purchase) being corrupted or something like that, if that's possible. I decided to simplify things for myself (not understanding RAID arrays much), so I just removed the RAID array and installed Windows and Ubuntu on the now separate hard disks. It worked fine.

I noticed quite a significant performance drop, however, with even Ubuntu boots taking longer than 30 seconds despite my laptop being both high-spec and only a few months old. Windows, as you can imagine, was dreadfully slow. I wasn't entirely convinced that this was entirely due to the loss of the RAID array - as even low-spec laptops with presumably no RAID arrays are supposed to boot Ubuntu in under 30 seconds apparently - but I read that RAID-0 arra


Fedora :: Boot Can't Access Ext4 Partitions On LVM Logical Volumes On RAID Array?

Feb 8, 2011

I have a Fedora 11 server which can't access the ext4 partitions on LVM logical volumes on a RAID array during boot-up. The problem manifested itself after a failed preupgrade to Fedora 12; however, I think the attempt at upgrading to FC12 might not have anything to do with the problem, since I last rebooted the server over 250 days ago (sometime soon after the last Fedora 11 kernel update). Prior to the last reboot, I had successfully rebooted many times (usually after kernel updates) without any problems. I'm pretty sure the FC12 upgrade attempt didn't touch any of the existing files, since it hung on the dependency checking of the FC12 packages. When I try to reboot into my existing Fedora 11 installation, though, I get an error screen. A description of the server filesystem (partitions may be different sizes now due to the growing of logical volumes):

Code:

- 250GB system drive
  250MB    /dev/sdh1                      /boot   ext3
  (rest)   LVM partition                  VolGroup_System
  10240MB  VolGroup_System-LogVol_root    /       ext4

[code]....

except he's talking about fakeRAID and dmraid, whereas my RAID is Linux software RAID using mdadm. This machine is a headless server which acts as my home file, mail, and web server. It also runs MythTV with four HD tuners. I connect remotely to the server using NX or VNC to run applications directly on the server. I also run an XP Professional desktop in a QEMU virtual machine on the server for times when I need to use Windows. So needless to say, it's a major inconvenience to have the machine down.
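
Since the RAID here is mdadm-based, a hedged rescue sequence from the F11 rescue environment (volume names taken from the listing above):

Code:
mdadm --assemble --scan    # bring up the md arrays
vgchange -ay               # activate VolGroup_System on top of them
mount /dev/VolGroup_System/LogVol_root /mnt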


Software :: Rebuilt RAID Array With Old Mount Points Present - File System Check Fails On Boot

Dec 2, 2009

I have one hard disk for my root partition and a disk array on a separate mount point. I rebuilt my disk array, but I didn't delete my original mount points beforehand because I was hoping it would just "pick up". So now when I boot up, the OS tells me that the filesystem check fails because it can't find the array to map to the mount point. I know that I need to edit my /etc/fstab and remove the line that defines my mount point on the disk array. But it appears to be a read-only filesystem when I am in repair mode; I can't force the write with vi.
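
In repair mode the root filesystem is mounted read-only, which is why vi can't write; a hedged two-liner:

Code:
mount -o remount,rw /    # make the root filesystem writable
vi /etc/fstab            # delete the stale mount-point line, then reboot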


Ubuntu Servers :: RAID-5 Recovery (spare/active) / Degraded And Can't Create Raid ,auto Stop Raid [md1]?

Feb 1, 2011

Could any RAID gurus kindly assist me with the following RAID-5 issue? I have an mdadm-created RAID5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.

Now the array is (logically) no longer able to start:

mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]

I was able to examine the disks though:

Code:
root@127.0.0.1:/etc# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 00.90.00
code....

Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin whatever small chance I have left to rescue my data, I would like to hear the input of this wise community.
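
Before any re-create, a far less risky hedged step is a forced assembly; --create, even with --assume-clean, rewrites the superblocks and only works if the device order and chunk size exactly match the original:

Code:
sudo mdadm --assemble --force /dev/md1 /dev/sd[a-d]2
# if it comes up, even degraded, copy the data off before anything else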


Ubuntu Servers :: Stuck At Boot With Raid (non Boot) In Fstab = Useless?

Nov 1, 2010

Forgive the terseness. I'm frazzled with this issue; perhaps I should have asked earlier. Every weekend for the past 2 months has been an endless cycle of "repair broken system" off the install disk.

Installed from Ubuntu Server 10.04 LTS x86_64, plus xfce-desktop. Here is uname -a:

Linux ournas 2.6.32-25-server #45-Ubuntu SMP Sat Oct 16 20:06:58 UTC 2010 x86_64 GNU/Linux

If I add my RAID + LVM to the fstab file, the boot stalls (no error, it just hangs waiting, forever). So that's not very user friendly to start with.

I've tried the suggestions about UUID in fstab, tried using LABEL instead, and even /dev/xxx. Every time it hangs. I've googled this endlessly and not found a solution. So don't ask why... since I seem to have tried every odd suggestion to fix this, I've lost track. There seems to be some consensus that whoever gave us plymouth laid an egg. Forgive me if I'm wrong, but did we need a better graphical boot if it breaks everything else?

[Code]...
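
A hedged fstab workaround on 10.04 (mountall understands nobootwait), so a not-yet-ready array can't wedge the boot; the device path below is a placeholder:

Code:
# /etc/fstab (sketch)
/dev/mapper/vg-lv  /mnt/raid  ext4  defaults,nobootwait  0  2
# and make sure the array assembles from the initramfs:
# sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
# sudo update-initramfs -u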







