Hardware :: Mounting Raid Device After Reboot?

Mar 24, 2009

How do I configure my RAID device so that it is assembled and mounted automatically at start-up?
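A common recipe (a sketch, assuming a software RAID built with mdadm; the device name, filesystem, and mount point below are examples) is to record the array in mdadm.conf and give it an fstab entry:

```shell
# Record the array so it is assembled at boot
# (the path is /etc/mdadm/mdadm.conf on Debian/Ubuntu):
mdadm --examine --scan >> /etc/mdadm.conf

# Add an fstab line so it is mounted at boot:
echo '/dev/md0  /mnt/raid  ext3  defaults  0  2' >> /etc/fstab
```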

View 1 Replies



Ubuntu Installation :: Mounting A NTFS Raid 0 Device?

Dec 27, 2010

Let's say this system has 3 hard drives. Drives #1 and #2 are RAID 0 and Windows 7 lives there. It is a hardware RAID, not software.

On Drive #3 Ubuntu has been installed using WUBI - it boots up and works okay - but it does not see the RAID array.

Do I just need a linux driver to be able to see & mount my "Windows" RAID0 array? Or is this even possible? Can anyone point me in the right direction?
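If the board's RAID is actually BIOS "fakeraid" (common on desktop chipsets), Linux can usually assemble it with dmraid; a sketch, where the /dev/mapper name is an example of what typically appears:

```shell
sudo apt-get install dmraid    # fakeraid support
sudo dmraid -ay                # activate all detected arrays
ls /dev/mapper/                # look for e.g. isw_xxxx_Volume0 plus partitions
sudo mkdir -p /mnt/windows
sudo mount -t ntfs-3g /dev/mapper/isw_xxxx_Volume01 /mnt/windows
```

A true hardware RAID card would instead present the array as a single /dev/sdX and need no dmraid at all.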

View 1 Replies View Related

CentOS 5 :: Specify The Device Driver When Mounting A Device?

Jul 28, 2010

I have installed live cd on usb pendrive. Everything works great. How can I find out which device driver it is using? Where are the device driver files stored? How do you specify the device driver when mounting a device?
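For what it's worth, mount itself never takes a device driver — you pass a filesystem type with -t and the kernel picks the block driver. To see which driver a device is using, something like this works (the device name is an example):

```shell
# Driver bound to the block device, via sysfs:
readlink /sys/block/sdb/device/driver

# Walk the whole device chain, drivers included:
udevadm info -a -n /dev/sdb | grep -i driver

# The driver module files themselves live under:
ls /lib/modules/$(uname -r)/kernel/drivers/
```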

View 3 Replies View Related

Ubuntu Installation :: Dual Boot SSD Non Raid - 1 Terabyte Raid 1 Storage "No Block Device Found"?

Sep 15, 2010

It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD HDDs, which will hold all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.

I installed Ubuntu 10.04 by first booting into "Try Ubuntu" mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components, and told it to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell grub to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but shows the two component disks of the array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)

I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top, following the instructions found at the bottom of this blog: [URL]. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses, and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
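For reference, such a local-top script is typically along these lines (a sketch, not tested here; adjust to taste):

```shell
#!/bin/sh
# Save as /etc/initramfs-tools/scripts/local-top/dmraid
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
# Activate fakeraid arrays before the root filesystem is mounted:
/sbin/dmraid -ay
```

Then chmod +x the script and run "update-initramfs -u" so it gets included in the initramfs.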

View 4 Replies View Related

Red Hat / Fedora :: /home Not Mounting After Reboot?

Feb 25, 2011

I have a number of Red Hat Enterprise Linux installs, versions 5.2, 5.3, 5.4 and 5.5. I have the same problem with all of them.

I have /home on its own partition, and every time I reboot /home does not mount.

Has anyone come across this issue before?

I've tried changing /etc/fstab, but I'm still having the same problem.
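For comparison, a typical fstab line for a /home partition looks like this (the UUID is a placeholder — get the real one from blkid):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext3  defaults  1  2
```

If the line looks right, it's worth checking /var/log/messages for an fsck failure or an SELinux denial aborting that mount during boot.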

View 4 Replies View Related

CentOS 5 :: Mounting A RAID Array?

May 11, 2011

Round two: I am trying to install a RAID 1 array on my system. I already have another RAID 1 array in there. I am using the BIOS RAID option to set up the array. Here's what dmraid -r tells me:

# dmraid -r
/dev/sda: pdc, "pdc_bajfedfacg", mirror, ok, 3906249984 sectors, data@ 0
/dev/sdb: pdc, "pdc_bajfedfacg", mirror, ok, 3906249984 sectors, data@ 0

[code]....

View 9 Replies View Related

Ubuntu :: NTFS Drive Not Mounting Upon Reboot

Oct 17, 2010

I used Wubi to install Ubuntu 10.10 onto my laptop alongside Windows 7. I need to access my Windows hard drive, however, so I used NTFS Configuration Tool to mount the drive. However, whenever I reboot, it fails to mount and I actually have to go back into NTFS Config Tool, delete the old mount, and remount it. This is tedious. My /etc/fstab file looks as follows:

[code]...
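For comparison, an NTFS line in /etc/fstab usually looks like this (the device and mount point are examples; a UUID from blkid is more robust than a /dev name, especially under Wubi):

```
/dev/sda2  /media/windows  ntfs-3g  defaults,locale=en_US.utf8  0  0
```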

View 2 Replies View Related

Ubuntu :: Mounting Encrypted Raid From Terminal?

Jun 4, 2011

I have a RAID array that contains an encrypted volume that I setup using Disk Utility. What I want to do is mount this volume from the terminal and therefore be able to mount at login (as the pass phrase is saved). At the moment I have to manually click on the volume in Nautilus first before using. I've been trying to use the following command to no avail:

Code:

gvfs-mount -mount /dev/md1

which simply returns "Error mounting location: volume doesn't implement mount"
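gvfs-mount expects a GIO location rather than a raw block device, which is likely why it refuses. If the volume is LUKS-encrypted, the plain terminal route is cryptsetup (a sketch; the mapping name and mount point are examples):

```shell
sudo cryptsetup luksOpen /dev/md1 secure    # prompts for the passphrase
sudo mkdir -p /mnt/secure
sudo mount /dev/mapper/secure /mnt/secure

# ...and later, to tear it down:
sudo umount /mnt/secure
sudo cryptsetup luksClose secure
```

To have it unlocked at boot instead, the usual route is an entry in /etc/crypttab plus a matching line in /etc/fstab.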

View 1 Replies View Related

Ubuntu :: Reboot And Select Proper Boot Device Or Insert Boot Media In Selected Boot Device And Press A Key

Apr 16, 2011

Reboot and select proper boot device or insert boot media in selected boot device and press a key. I got this error after: reducing my Windows 7 partition by about 100GB, creating a new partition (100GB), and copying my Ubuntu partition (10GB) to the new partition. After it was copied and pasted, the original partition was deleted. I now had two partitions: a new 100GB Ubuntu partition and a 600GB (or so) Windows 7 partition.

All of this was done using a bootable USB with Ubuntu 10.10 and GParted partition editor. Now when I boot I get the "Reboot and select proper boot device or insert boot media in selected boot device and press a key." error.

View 9 Replies View Related

Ubuntu Servers :: 10.4.1 Hang On Reboot After Mounting LVM Volume

Jan 4, 2011

I added another disk to the server and created a mount point in fstab:
/dev/VolGroup00/LogVol00 /opt ext3 defaults 1 2
Everything is working perfectly... halt, boot, the system... but when I want to reboot with the command sudo reboot, it hangs at the end of all the shutdown initialization, printing some number. If I remove the disk from fstab, then reboot works.

View 3 Replies View Related

Ubuntu :: Mounting 5tb Windows Stripe Raid - Can't Seem To Get It To Recognize

Dec 30, 2010

Making the move to Ubuntu entirely, but I want to get my 5TB RAID functioning.

I can't seem to get it to recognize.

fdisk -l:

Code:

It works under Windows. (It's GPT, I think, btw.)

Otherwise I can re-partition and format it (it's all backed up), but I want to be able to read the RAID under Windows as well...

View 9 Replies View Related

Ubuntu Servers :: RAID Array Not Mounting Correctly

Jun 6, 2011

I have an ubuntu 10.04 machine that I use primarily as a file server. I have a RAID5 array built with mdadm from 3 component disks that worked properly until a recent upgrade (I'm not sure exactly what broke it though). The array is /dev/md0 and is set to mount at /var/media on bootup. *Now*, when the system cold boots it hangs partway through the bootup sequence and throws the following error:

The disk drive for /var/media is not ready yet Press S to skip ... Once I "S"kip this manually, I can see that LOWER in the boot sequence mdadm gets called and assembles the drive, and once fully booted into the system I can then simply do a "mount -a" and the array mounts properly. SO... my gut feeling is that some portion of one of the upgrades changed the order in which things are called, and now the "mdadm assemble" is not triggered until AFTER the system tries to mount the drives. My problem is that I don't know the stuff that controls the boot sequence well enough to dig in the right place.

As a workaround I can remove that entry from /etc/fstab, but then (of course) the system won't auto-mount the array. It's better than the boot process completely hanging, because at least THIS I can fix remotely, but I'd really like to know:

1) why this broke in an upgrade and is it a known problem?
2) how to get it back to where it auto-assembles and then auto-mounts the array on bootup.
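The usual fix for this ordering problem on 10.04 is to make sure the array is listed in mdadm.conf inside the initramfs, so mdadm assembles it before mountall runs (a sketch; check the conf file for duplicate ARRAY lines afterwards):

```shell
# Append the array definition to mdadm's config:
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
# Rebuild the initramfs so early boot knows about the array:
sudo update-initramfs -u

# Stopgap: the Ubuntu-specific 'nobootwait' option stops the boot
# from hanging if the array still isn't ready, e.g. in /etc/fstab:
# /dev/md0  /var/media  ext4  defaults,nobootwait  0  2
```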

View 9 Replies View Related

Ubuntu Installation :: Raid 0 Is Not Mountable After Reboot

Jan 7, 2010

I am currently setting up an HTPC running Karmic. The problem I am having is getting my RAID 0 to be mountable. The RAID is not my boot partition, but is for data storage. My setup is a Zotac motherboard with three SATA connectors. I have a 300 GB drive, my eSATA port, and my DVD attached to these. In the PCIe expansion slot I have installed a Syba 2-port SATA PCIe 1x card using the Sil3132 SATA II host chipset. Off of this I have 2 1.5 TiB HDDs that I am setting up as RAID 0. During boot I enter the chipset BIOS and establish this as a RAID 0.

When I install Karmic it sees the RAID and my 300 GB drive, so I install to the 300 GB drive and everything works fine. I am able to boot to the HDD and run the OS. I then installed GParted and set up a partition on my RAID as GPT, since I want one large partition of 2.78 TiB. I then format it through GParted as ext4 and am able to mount and access it. I then reboot the system, and can no longer mount the filesystem. What I found interesting is that if I reopen GParted I can then mount it. I traced it down to the fact that until I access GParted, the block special device (sil_bgabagabaedd1) does not appear in /dev/mapper. Every time I reboot I need to go into GParted to restore the block special device, and then it can be mounted. I think I am missing something in the RAID setup as to why it is not being retained. What have I missed? What do I need to do to retain the block special device? Is there a boot config setting?

Edit: I did further research and found that if I run kpartx the device will appear just as with GParted, but on reboot it vanishes. I found something similar in this thread, but I am not comfortable updating dmraid: [URL]. I think it is related to GPT, and I will try a smaller partition to see if the behavior changes.

View 1 Replies View Related

Ubuntu :: Raid Array Inactive After Reboot?

Apr 24, 2011

I had some issues with my RAID6 array (15 disks), where 5 disks got disconnected (each set of five disks is connected to the motherboard via one SATA cable), which brought down the RAID array. I fixed this by re-adding the disks and running the array:

Code:

mdadm -R /dev/md1

However, after rebooting, the array appears inactive, and I have to go through the same motions to fix this and make it active.
The array is present in /etc/mdadm/mdadm.conf, along with 2 other RAID arrays (3 arrays total):

Code:

DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

[code]...

View 1 Replies View Related

General :: RAID Disk Inactive After Reboot

Aug 12, 2010

I just configured two raid setups but after a reboot they are not mounted and seem to be inactive.

md127 = sde1, sdf1 and sdi1 (raid 5)
md0 = sda1 and sdh1 (raid 0)
Code:
root@server /]# cat /proc/mdstat
Personalities :
md127 : inactive sdf1[1](S) sde1[2](S)
78156032 blocks
md0 : inactive sda1[0](S)
488382977 blocks super 1.2
unused devices: <none>

Code:
[root@server /]# fdisk -l | grep "Disk /"
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
Disk /dev/sdc: 122.9 GB, 122942324736 bytes
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
Disk /dev/sde: 40.0 GB, 40020664320 bytes
Disk /dev/sdf: 40.0 GB, 40020664320 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 251.0 GB, 251000193024 bytes
Disk /dev/sdi: 40.0 GB, 40020664320 bytes
Disk /dev/sdj: 500.1 GB, 500107862016 bytes

Code:
[root@server /]# cat /etc/mdadm.conf
DEVICE /dev/sdi1 /dev/sdf1 /dev/sde1 /dev/sda1 /dev/sdh1
ARRAY /dev/md127 UUID=5dc0cf7a:8c715104:04894333:532a878b auto=yes
ARRAY /dev/md0 UUID=65c49170:733df717:435e470b:3334ee94 auto=yes

As you can see, they now show up as inactive. And for some reason sdi1 and sdh1 are not even listed. What can I do to get them back? To make matters worse, I placed some important data on them, and even if I was clever enough to keep an extra copy on another drive, guess which drive that was? So, I need to get them activated as-is (at least so I can get the data off them) before I can rebuild them from scratch. I'm running Mandriva 2010.1 and created them using the built-in disk partitioner.
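The (S) flags in /proc/mdstat mean the members were picked up as spares of a half-assembled array. A common recovery sequence — with the usual caveat that forcing assembly on disks carrying important data is at your own risk — is to stop the inactive arrays and force-assemble them from their members (device names taken from the output above):

```shell
mdadm --stop /dev/md127
mdadm --assemble --force /dev/md127 /dev/sde1 /dev/sdf1 /dev/sdi1

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdh1

cat /proc/mdstat    # the arrays should now show as active
```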

View 14 Replies View Related

CentOS 5 :: Raid 5 Grow Does Not Survive Reboot?

Jun 22, 2010

I installed CentOS 5.5. After the install, I decided to put in 3 identical disks for RAID 5. All the disks are IDE disks. Then I put in a SATA disk and partitioned it to add another partition to the RAID 5 array. Everything works fine until I reboot my system. After the reboot, the SATA partition I added to the RAID 5 array shows as removed. I had to re-add it using "mdadm --add" to make the RAID 5 array work.

View 5 Replies View Related

Debian :: SATA Raid, /home Partition Mounting Error?

Nov 9, 2010

I've installed Debian Squeeze recently and for some reason I have problems mounting the /home partition during startup.

There's an error: Mounting local filesystems... mount: special device /dev/mapper/isw_bbfedcffgi_Volume0p6 does not exist. failed

I've tried using fsck - no result, the file system is healthy. I've tried formatting it once again (fresh copy, no user data) and it's still not working. What is more, mounting the partition manually goes well - I can read the data and write to it. All other partitions are OK.

I have no idea what's going on and why mounting /home fails. I've written this post on Polish debian users forum, but no response - only to give more info, so I'll put it here also:

ls -al /dev/mapper

crw------- 1 root root 10, 59 Nov 9 19:34 control
lrwxrwxrwx 1 root root 7 Nov 9 19:34 isw_bbfedcffgi_Volume0 -> ../dm-0
lrwxrwxrwx 1 root root 7 Nov 9 19:34 isw_bbfedcffgi_Volume01 -> ../dm-2
lrwxrwxrwx 1 root root 7 Nov 9 19:34 isw_bbfedcffgi_Volume05 -> ../dm-3
lrwxrwxrwx 1 root root 7 Nov 9 19:34 isw_bbfedcffgi_Volume06 -> ../dm-4
code....

View 2 Replies View Related

OpenSUSE Install :: Raid Root Partition Not Mounting After Updates

Jan 27, 2010

I'm having some problems with a hosted openSUSE 11.2 server. It was running fine until I did a "zypper up" to apply patches. This included a kernel update.

On reboot the / partition does not mount, giving the following error:

Unrecognized mount option "defaults.noatime", or missing value mount: wrong fs type, bad option, bad superblock on /dev/md2.

Through an Ubuntu rescue disk (this is what Hetzner provides) the disk can be mounted without problems.

(I installed a fresh openSUSE 11.2 with a similar configuration and got the same results after the update.)

The server is a hosted installation from Hetzner in Germany with just the basics for LAMP setup.

The disk setup is as follows using software raid1:
swap /dev/md0 (/dev/sda1 /dev/sdb1)
/boot /dev/md1 (/dev/sda2 /dev/sdb2)
/ /dev/md2 (/dev/sda3 /dev/sdb3)

View 3 Replies View Related

OpenSUSE Install :: Mounting Software RAID In Rescue Mode

Feb 27, 2011

I have a software RAID 1 (mirroring) on my 2 hard drives configured through OpenSuse 11.3 installer. When I boot from the OpenSuse 11.3 install DVD in rescue mode, the RAID isn't recognized, ie attempting to mount /dev/md0 results in 'bad superblock' messages. I can still mount individual disks in the array though (/dev/sda1, /dev/sdb1, I did it read-only so not to corrupt the array). I also tried booting from the Centos 5.5 install DVD in rescue mode on the same computer and it has no problems finding the RAID 1. I was able to mount /dev/md0. Is the OpenSuse 11.3 install DVD in the rescue mode not supposed to find the RAID 1 or am I doing something wrong here?

View 3 Replies View Related

CentOS 5 Hardware :: Mounting RAID Array On Samba Server?

Jun 23, 2009

I need to mount my raid array on CentOS 5.2 samba server.

Here are my hardware specs:
Motherboard: Tyan S2510 LE dual PIII
CPUs: 2 x Intel PIII 850 MHz, Socket 370
Memory: 4 GB Crucial PC133 ECC SDRAM
OS drives: 2 x IBM Travelstar 6.4 GB 2.5" hard drives (low heat/noise)
Storage: 4 x Seagate 500 GB IDE 7200 rpm
RAID controller: 3Ware 7500-12 (RAID 5, 66 MHz PCI bus)
NIC: 3COM 3C996B-T gigabit (66 MHz PCI bus)

I have the 2 IBMs set as RAID 1 (mirror) and the 4 Seagates as RAID 5 (1.5 TB). I have installed the OS with minor problems (the motherboard doesn't like the 2.6.18-128.1.14.el5 kernel, so I removed it from my grub.conf).

My problem is mounting the RAID array. I have done the following:
partitioned with fdisk;
fdisk /dev/sdb
Then created the filesystem with the following command;
mkfs.ext3 -m 0 /dev/sdb

The drive was formatted with the ext3 file system, but I have mounted it as ext2 as I don't want 'journaling' to occur. I then edited my /etc/fstab like this: .....

Then: mount -a
When I go into my "home" directory and type ls, I get the following:
[root@hydra home]# ls -l
total 24
drwx------ 2 zog zog 4096 Jun 23 15:50 zog
lrwxrwxrwx 1 root root 6 Jun 23 15:46 home -> /home/
drwxrwxrwx 2 root root 16384 Jun 23 15:34 lost+found
drwxr-xr-x 2 root root 4096 Jun 23 17:18 tmp
Why is my home directory showing under home?

View 5 Replies View Related

Fedora :: Software Raid 5 - UUID And Md Number Changes On Reboot?

Jun 18, 2010

So I recently set up a fedora 13 server using software raid. Let me go over the initial install and maybe that will help explain why I'm running into problems with one of the arrays. During installation I had only 2 disks in the equipment (WD 750GB each) Partitioned them thusly:

Code:
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00062206

[Code]...

View 3 Replies View Related

Debian :: Raid - No Booting / Reboot The System Does Not Boot?

Nov 5, 2010

There seems to be a problem with RAID on Debian. I got a new Fujitsu Primergy TS 100 S1 server with hardware RAID (and 2 disks) and installed everything nicely over the net, including GRUB - but when it comes to rebooting, the system does not boot.

Is there anybody here who knows about the problem?

View 1 Replies View Related

General :: RAID 1 With 2 Disks Starts Degraded After Reboot From 3rd

Jun 20, 2010

Basically, I installed Debian Lenny creating two RAID 1 devices on two 1 TB disks during installation. /dev/md0 for swap and /dev/md1 for "/"
I did not pay much attention, but it seemed to work fine at the start - both RAID devices were up early during boot, I think. After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling & installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.

Let me describe my HDDs setup: when I do "sudo fdisk -l" it gives me sda1,sda2 raid partitions on sda, sdb1,sdb2 raid partitions on sdb which are my two 1 TB drives and sdc1, sdc2, sdc5 for my 3rd 160GB drive I actually boot from ( I mean GRUB is installed there, and its chosen as boot device in BIOS ). The problem is that raid starts degraded every time ( starts with 1 out of 2 devices ). When doing " cat /proc/mdstat " I get "U_" statuses and 2nd devices is "removed" on both md devices.

I can successfully run partx -a sdb, which gives me sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sda1". After I re-add the devices it syncs the disks, and after about 3 hours I see a fine status in mdstat. However, when I reboot, it again starts with a degraded array. I get the feeling that after I re-add the disks and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan" but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.

View 1 Replies View Related

General :: Hard Drive Dropping Out Of RAID On Reboot

May 30, 2011

Every time I reboot my server, one of my hard drives drops out of the RAID5 array. I'm pretty sure that there's nothing wrong with the drive itself. I bought all three drives at the same time, and they are identical in make/model/capacity. While the server is running, it's smooth sailing. However, whenever I shut down or reboot, I get an email message that the array is degraded. It's always /dev/sda1 that drops out of the array. I can always rebuild the array by adding the partition back in, but it's a bit of a pain. Any suggestions on how to troubleshoot this?

View 13 Replies View Related

Ubuntu Servers :: Mounting Existing RAID Array On Fresh Installation?

Aug 1, 2011

I'm running 10.04 x86 server with a really simple installation on a single 250GB boot disk. I then have a RAID5 array as /dev/md0 (set up using mdadm with x4 2TB disks). All is working well. My mdadm.conf file looks like this

Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.

[code]....

if I was to lose the boot disk and need to remount the RAID array on a fresh installation, what steps do I need to go through. My assumption is that the superblocks on the RAID disks will be used and I don't need to keep any additional information - is this right?

View 6 Replies View Related

Ubuntu :: Mounting None On /dev Failed: No Such Device?

Jan 31, 2011

I'm running Lucid with the 2.6.32-28-generic and 2.6.31-11-rt (abogani's PPA) kernels on a fairly new laptop. When I boot the rt kernel, right after grub I get a message:

Code:
mount: mounting none on /dev failed: no such device
W: devtmpfs not available, falling back to tmpfs for /dev

Then after a few seconds I see a lot of fast-moving text, and then Ubuntu starts just fine. When booting the generic kernel, I don't see any text at all, but it takes just as long as the rt kernel, so I guess the problem affects both.

View 5 Replies View Related

Hardware :: Mounting And Accessing A Usb Device?

Jun 1, 2010

So I am using a Debian 5.0.4 version of Linux with a command-line interface (the bash shell). I have been trying to get my USB device to be readable, which I suppose means I need to run a mount command like mount -t usbfs /dev/usb /media/usb. But I have not had any luck getting access to my USB drive... I run lsusb and find that it is attached, and I can find it in the /proc directory, but I cannot access my USB drive.
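usbfs exposes the USB bus for tools like lsusb - it is not the storage itself. A flash drive appears as a SCSI-style disk, so the mount looks more like this (the device name is an example; check dmesg right after plugging the drive in):

```shell
dmesg | tail                          # look for a line like "sdb: sdb1"
mkdir -p /media/usb
mount -t vfat /dev/sdb1 /media/usb    # or -t auto / ntfs-3g as appropriate
```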

View 2 Replies View Related

Slackware :: Mounting Device As User In KDE?

Mar 30, 2010

When I log in as a user and go into the KDE environment, I am not able to mount a device because access is denied. What is the best way to achieve this?
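One lightweight option, if the HAL/udev policy isn't doing it for you, is pmount, which lets an unprivileged user (usually one in the plugdev group) mount removable devices without an fstab entry (the device name is an example):

```shell
pmount /dev/sdb1     # mounts at /media/sdb1
pumount /dev/sdb1    # unmounts it again
```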

View 2 Replies View Related

Ubuntu Servers :: RAID Volume Won't Activate After Reboot During Sync?

Jul 24, 2010

I've got a file server with two RAID volumes. The one in question is 6 1TB SATA drives in a RAID6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lock up. After a hard-reboot, the array will not start. I'm out of ideas on how to kick this over.

Code:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

[code]....

View 4 Replies View Related

Ubuntu :: Authentication For Mounting Device W/o Fstab?

Feb 9, 2010

I use Ubuntu 9.04 and I have a problem. I have a file containing an ext2 file system (mke2fs -F ~/fs.ext2). Is it possible for a user (not root) to mount this file from the terminal, without editing fstab?

P.S. Using /dev/loopN is not what I need. Maybe it's possible to use FUSE - is it?
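FUSE can indeed do this without fstab or /dev/loopN: fuse-ext2 mounts an ext2 image file as an ordinary user. A sketch (the command and package names may differ by distro, and the user must typically be in the fuse group):

```shell
mkdir -p ~/mnt
fuseext2 -o rw+ ~/fs.ext2 ~/mnt    # from the fuse-ext2 package
# ...use the files under ~/mnt, then unmount:
fusermount -u ~/mnt
```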

View 1 Replies View Related

Copyrights 2005-15 www.BigResource.com, All rights reserved