Ubuntu Installation :: Raid 0 Is Not Mountable After Reboot
Jan 7, 2010
I am currently setting up an HTPC running Karmic. The problem I am having is getting my RAID 0 to be mountable. The RAID is not my boot partition; it is for data storage. My setup is a Zotac motherboard with three SATA connectors, to which I have attached a 300 GB drive, my eSATA port, and my DVD drive. In the PCIe expansion slot I have installed a Syba 2-port SATA PCIe 1a card using the Sil3132 SATA II host chipset. Off of this I have two 1.5 TiB HDDs that I am setting up as RAID 0. During boot I enter the chipset BIOS and establish this as a RAID 0.
When I install Karmic it sees the RAID and my 300 GB drive, so I install to the 300 GB drive and everything works fine. I am able to boot to the HDD and run the OS. I then installed GParted and set up a GPT partition table on the RAID, since I want one large partition of 2.78 TiB. I then format it through GParted as ext4 and I am able to mount and access it. I then reboot the system and can no longer mount the filesystem. What I found interesting is that if I reopen GParted I can then mount it. I traced it down to the fact that, until I access GParted, the block special device (sil_bgabagabaedd1) does not appear in /dev/mapper. Every time I reboot I need to go into GParted to restore the block special device before it can be mounted. I think I am missing something in the RAID setup, which is why it is not being retained. What have I missed? What do I need to do to retain the block special device? Is there a boot config setting?
Edit: I did further research and found that if I run kpartx the device appears, just as with GParted, but on reboot it vanishes. I found something similar in this thread, but I am not comfortable updating dmraid: [URL] I think it is related to GPT, and I will try a smaller partition to see if the behavior changes.
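For reference, one workaround along those lines is simply to re-run kpartx automatically at boot until the dmraid/GPT issue is solved properly. A minimal sketch, assuming the parent mapping is the name above without the trailing partition number and that /etc/rc.local is used:
Code:
# /etc/rc.local (sketch) - recreate the partition mapping for the fakeraid set at boot
/sbin/kpartx -a /dev/mapper/sil_bgabagabaedd
exit 0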
I had some issues with my RAID6 array (15 disks), where 5 disks got disconnected (each set of five disks is connected to the motherboard via one SATA cable), which brought down the RAID array. I fixed this problem by re-adding the disks and running the array:
Code:
mdadm -R /dev/md1
However, after rebooting, the array appears inactive, and I have to go through the same motions to fix it and make it active. The array is present in /etc/mdadm/mdadm.conf, along with 2 other RAID arrays (3 arrays total):
Code:
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
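If the ARRAY line for md1 is missing or stale, one common check is to compare what mdadm currently detects against the config file and rebuild the initramfs afterwards; a sketch, assuming Debian paths:
Code:
# compare the detected arrays with what mdadm.conf declares
mdadm --detail --scan
grep ^ARRAY /etc/mdadm/mdadm.conf
# if they differ, append the detected lines and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u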
I've got a file server with two RAID volumes. The one in question is 6 1TB SATA drives in a RAID6 configuration. I recently thought one drive was becoming faulty, so I removed it from the set. After running some stress tests, I determined my underlying problem hadn't cleared up. I added the disk back, which started the resync. Later on, the underlying problem caused another lock up. After a hard-reboot, the array will not start. I'm out of ideas on how to kick this over.
As you can see they now show up as inactive, and for some reason sdi1 and sdh1 are not even listed. What can I do to get them back? To make matters worse, I placed some important data on them, and even though I was clever enough to keep an extra copy on another drive, guess which drive that was? So I need to get them activated as-is (at least so I can get the data off them) before I can rebuild them from scratch. I'm running Mandriva 2010.1 and created them using the built-in disk partitioner.
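A non-destructive first step is usually to check what the member superblocks report and then try to reassemble; a sketch only, where /dev/mdX and the partition names are placeholders for the real devices:
Code:
# inspect what each member's superblock reports (device names are placeholders)
mdadm --examine /dev/sd[a-i]1
# stop the inactive array, then try reassembling it, starting it even if degraded
mdadm --stop /dev/mdX
mdadm --assemble --run /dev/mdX /dev/sd[a-i]1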
I installed CentOS 5.5. After the install, I decided to put in 3 identical disks for RAID 5. All the disks are IDE disks. Then I put in a SATA disk and partitioned it to add another partition to the RAID 5 array. Everything worked fine until I rebooted my system. After the reboot, the SATA partition I added to RAID 5 shows as removed. I had to re-add it using "mdadm --add" to make the RAID 5 array work.
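Two things that commonly cause a member to be dropped at boot on CentOS 5 are worth checking - sketched below as assumptions to verify rather than a definitive fix: the SATA member's partition type should be "fd" (Linux raid autodetect), and the array should be declared in /etc/mdadm.conf; the device name is a placeholder:
Code:
# check the partition type of the SATA member (placeholder device)
fdisk -l /dev/sda
# declare the array so it is assembled consistently at boot
mdadm --detail --scan >> /etc/mdadm.conf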
So I recently set up a Fedora 13 server using software RAID. Let me go over the initial install; maybe that will help explain why I'm running into problems with one of the arrays. During installation I had only 2 disks in the machine (WD 750 GB each) and partitioned them thusly:
There seems to be a problem with RAID on Debian. I got a new Fujitsu Primergy TS 100 S1 server with hardware RAID (and 2 disks), installed everything nicely over the net, including GRUB - but when it comes to reboot, the system does not boot.
Is there anybody here who knows about the problem?
Basically, I installed Debian Lenny, creating two RAID 1 devices on two 1 TB disks during installation: /dev/md0 for swap and /dev/md1 for "/". I did not pay much attention, but it seemed to work fine at the start - both RAID devices were up early during boot, I think. After that I upgraded the system to testing, which involved at least upgrading GRUB to 1.97 and compiling and installing a new 2.6.34 kernel (udev refused to upgrade with the old kernel). The last part was a bit messy, but in the end I have it working.
Let me describe my HDD setup: when I do "sudo fdisk -l" it shows sda1 and sda2 RAID partitions on sda, sdb1 and sdb2 RAID partitions on sdb (my two 1 TB drives), and sdc1, sdc2, sdc5 on my third 160 GB drive, which I actually boot from (I mean GRUB is installed there, and it's chosen as the boot device in the BIOS). The problem is that the RAID starts degraded every time (it starts with 1 out of 2 devices). When doing "cat /proc/mdstat" I get "U_" statuses, and the 2nd device is "removed" on both md devices.
I can successfully run partx -a sdb, which gives me sdb1 and sdb2, and then I re-add those to the RAID devices using "sudo mdadm --add /dev/md0 /dev/sdb1". After I re-add the devices it syncs the disks, and after about 3 hours I see a fine status in mdstat. However, when I reboot, it again starts with a degraded array. I get a feeling that after I re-add the disks and sync the array I need to update some configuration somewhere. I tried "sudo mdadm --examine --scan", but its output is no different from my current /etc/mdadm/mdadm.conf, even after I re-add the disks and sync.
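Since the scanned output already matches mdadm.conf, the missing piece is often the copy of that configuration inside the initramfs, plus a check of what the on-disk superblocks actually say; a sketch, assuming Debian paths and that sdb is the member that keeps dropping:
Code:
# see what the superblocks on the dropping member report
sudo mdadm --examine /dev/sdb1 /dev/sdb2
# rebuild the initramfs so early boot assembles with the current configuration
sudo update-initramfs -u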
Every time I reboot my server, one of my hard drives drops out of the RAID5 array. I'm pretty sure that there's nothing wrong with the drive itself. I bought all three drives at the same time, and they are identical in make/model/capacity. While the server is running, it's smooth sailing. However, whenever I shut down or reboot, I get an email message that the array is degraded. It's always /dev/sda1 that drops out of the array. I can always rebuild the array by adding the partition back in, but it's a bit of a pain. Any suggestions on how to troubleshoot this?
I recently upgraded to Fedora 14 from 13. It was an in-place upgrade. I can't recall for sure, but I do believe I had these problems in F13 before the upgrade. The F13 install was from a Live CD. Anyway, I have a three-drive RAID 5 array set up - 3x 750GB. For some very annoying reason, each time I reboot my F14 system, it hangs with an error about not being able to find a superblock on /dev/md126 and /dev/md127. I have tried to stop and remove /dev/md126 and /dev/md127, but they always seem to come back. I have also noticed in the output of fdisk -l that drives sda and sdd like to swap places sometimes for an unknown (to me) reason. Any other output that is needed, please ask. I recreated the array just yesterday with:
Code:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
I would cat mdadm.conf in /etc, but I removed it previously to try to figure out the problem and it was not
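Since mdadm.conf was removed, one hedged approach is to pin the array to a stable name and rebuild the initramfs so md126/md127 stop reappearing; a sketch using standard Fedora tooling, to be verified rather than taken as the definitive fix:
Code:
# stop the stray arrays if they are currently assembled
mdadm --stop /dev/md126 /dev/md127
# write a fresh config naming the array, then rebuild the initramfs (Fedora uses dracut)
mdadm --detail --scan > /etc/mdadm.conf
dracut -f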
I have installed Ubuntu on my m1530 since 8.04 and currently dual boot Win7 and 10.10. I would like to dual boot on my PC, but I have run into a problem. I am not a pro at Ubuntu, but I cannot solve this problem by reading forums like I have in the past.
I realize this is a common problem, but I have noticed people having success.
I have an M4A87TD EVO MB with two Seagate drives in RAID 0 (the RAID controller is the SB850 on that MB). I used the RAID utility to create the RAID drive that Windows 7 x64 uses. I have 2 partitions and 1 unused space. Partition 1 is Windows, partition 2 is for media, and the remaining unused space is for Ubuntu.
I am running ubuntu-10.10-desktop-amd64 off a Cruzer 16GB flash drive that was installed via Universal-USB-Installer-1.8.1.4.
My problem, like so many others', is that when I load into Ubuntu, GParted detects two separate hard drives instead of the RAID. I read that this is because kpartx is not installed on 10.10. I then went into LiveCD mode and downloaded kpartx from Synaptic Package Manager. GParted still reported two drives. I opened a terminal and ran a few commands with kpartx and received an error. (Forgive me, I didn't write it down, but I believe it said something about a communication error. I will try again later and see.)
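For what it's worth, on a fakeraid set like this the mapping is usually created by dmraid first, with kpartx only exposing the partitions inside it. A sketch of the usual sequence in a live session, where the pdc_* name is just a placeholder for whatever dmraid reports:
Code:
sudo apt-get install dmraid kpartx
# create /dev/mapper entries for the BIOS RAID set
sudo dmraid -ay
ls /dev/mapper/
# expose the partitions inside the mapped set (name below is a placeholder)
sudo kpartx -a /dev/mapper/pdc_XXXXXXXX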
Currently I am reflashing the Cruzer with a persistence file of 4GB. I am not familiar with this process, but I understand that my LiveCD boot will save information I download to it. I decided to try this method because I was going to install kpartx and reboot to see if it made a difference.
I am looking for any suggestions on a different method, or perhaps for someone to tell me that the RAID controller or some other hardware isn't supported. I did install ubuntu-10.10-alternate-amd64 on my flash drive, but failed to get past detecting my CD-ROM drive since it's not plugged in. If this method is viable, I will plug it in. I also watched the ..... video where a guy creates RAID 0 with the alternate CD, but it wasn't a dual boot and didn't use a RAID controller from a MB.
1st, I am relatively new to Linux (but not to *nix). I have 4 disks assembled in the following Intel AHCI BIOS fake RAID arrays:
2x320GB RAID1 - used for operating systems (md126)
2x1TB RAID1 - used for data (md125)
I used the 320GB RAID to install my operating system, and I didn't even select the second RAID during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available; it was possible to make it visible in Linux with mdadm --assemble --scan. After that I created one maximum-size partition and one maximum-size ext4 filesystem in it, mounted it, and used it. After a restart I got a few I/O errors during boot regarding md125, plus an inability to mount the filesystem on it, and was dropped into the repair shell. I commented the filesystem out in fstab and it booted. To my surprise, the array was marked as "auto read only":
[Code]...
and the partition in it was not available as a device special file in /dev:
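A sketch of how one might clear the auto-read-only state and re-register the partition after boot - the commands are standard mdadm/partx usage, but the md125 name is simply taken from the post above:
Code:
# switch the array from auto-read-only to read-write
mdadm --readwrite /dev/md125
# re-read the partition table so the partition device appears under /dev
partx -a /dev/md125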
I have a 10x2TB disk array that I'm trying to build into a single software RAID 5. I have tried this twice now; the first time it made it to 58.7%, then the machine locked up and the array would not restart after a reboot. On my 2nd try all was looking good until about 50%, when I noticed that the speed dropped by half and that ksoftirqd/2 is taking up a lot of CPU (about 90%); md0_resync and md0_raid5 are also taking 60-90%, whereas when the build started they took 7%. When I do a dmesg I see a lot of the message "compute_blocknr: map not correct".
For a little info on the physical setup: this is running on an Atom 510 with 2GB of memory; the drives are connected to an Addonics 4-Port RAID 5 / JBOD SATA II PCI controller using the sil3124 chipset. I'm using 2 Addonics 5X1 SATA port multipliers connected to the controller to get the 10 drives attached. All drives show up and don't seem to have any issues. I'm running a fully updated (as of 3/20/10) version of CentOS 5.4.
I will let this continue to run overnight, but I expect it to be locked up by morning if it follows what the last attempt did.
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
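Before anything destructive, a commonly suggested and less risky first step is to compare the event counters recorded on the members and attempt a forced assembly; a sketch, with the device names taken from the command above and the md1 name from the error message:
Code:
# compare Events and State recorded in each superblock
mdadm --examine /dev/sd[abcd]2 | egrep 'Event|State'
# try to force-assemble the original members before considering --create --assume-clean
mdadm --assemble --force /dev/md1 /dev/sd[abcd]2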
I set up a 4-disk RAID 5 array using the Palimpsest utility several months ago, and all was well until last week. After a hard reboot, the RAID array failed to come up with the message:
Failed activating drive: Error assembling array: mdadm exited with exit code 1:
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 has no superblock - assembly aborted
The disk in question was still visible in Palimpsest, and the test utility succeeded. After searching the archives, I checked /proc/mdstat to see if another array had possession of the device. No such luck. I looked in syslog and found this, but don't know how to proceed:
Mar 28 08:42:26 david-phenom kernel: [   61.297959] md: md0 stopped.
Mar 28 08:42:26 david-phenom kernel: [   61.418841] md: bind<sdc1>
Mar 28 08:42:26 david-phenom kernel: [   61.424179] md: bind<sdb1>
Mar 28 08:42:26 david-phenom kernel: [   61.424384] md: bind<sda1>
Mar 28 08:42:26 david-phenom kernel: [   61.424543] md: bind<sdd1>
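The "Device or resource busy" on /dev/sdb1 usually means something else has already claimed the partition; a few diagnostic commands that might show what is holding it - purely a sketch of where to look:
Code:
# is another md array or device-mapper target already using sdb1?
cat /proc/mdstat
sudo dmsetup ls
# does the disk carry leftover BIOS fakeraid metadata that dmraid picks up?
sudo dmraid -r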
It's been a real battle, but I am getting close. I won't go into all the details of the fight that I have had, but I've almost made it to the finish line. Here is the setup: ASUS Z8PE-D18 motherboard, 2 CPUs, 8 GB RAM. I recently added an OCZ Agility SSD and defined a RAID 1 virtual disk on the 1 TB WD HDDs, which holds all of my user data; the SSD is for executables. The BIOS is set to AHCI. Windows 7 installed fine and recognizes the RAID VD just fine.
I installed Ubuntu 10.04 by first booting into the try-it mode, then opening a terminal and issuing a "sudo dmraid -ay" command, then performing the install. I told it to install the RAID components and to let me specify the partitions manually. When setting up the partitions, I told it to use the free space I set aside on the SSD from the Windows 7 install as ext4 and to mount root there. Ubuntu installed just fine, grub2 comes up just fine, and Windows 7 boots without a hitch, recognizing the mirrored partition as I indicated previously. When I tell GRUB to boot Linux, however, it pauses and I get the "no block devices found" message. It will then boot, but it does not recognize the RAID array. After Ubuntu starts up I can run "dmraid -ay" and it recognizes the RAID array, but shows the two component disks of the RAID array as well. It will not allow the component disks to be mounted, but they show up, which is annoying. (I can live with that if I have to.)
I have fixed a similar problem before by setting up a dmraid script in /etc/initramfs-tools/scripts/local-top ... following the instructions found at the bottom of this blog: [URL].. To recap: my problem is that after grub2 fires up Ubuntu 10.04.1 LTS (Lucid Lynx), it pauses and I get "no block devices found". It then boots but does not recognize the RAID array until I manually run "dmraid -ay". I've hunted around for what to do but I have not found anything. It may be some timing issue or something, but I am so tired of beating my head against this wall.
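For reference, a local-top script of the kind the post mentions typically looks something like the sketch below; the exact contents depend on the blog's instructions, so treat this as an assumption rather than the known-good version:
Code:
#!/bin/sh
# /etc/initramfs-tools/scripts/local-top/dmraid (sketch)
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac
# activate the fakeraid sets before the root filesystem is mounted
/sbin/dmraid -ay
exit 0
After making the script executable, an "update-initramfs -u" would be needed so it actually ends up in the initramfs.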
I can't get Ubuntu to mount my Sony DRU540A, no matter what I do. Audio CDs, data CDs and burning are impossible; does anybody have any ideas on how to mount this unmountable drive? It worked well enough to install Jaunty Jackalope, and I never had any problems using this drive with Windows XP, but now that I've upgraded to Karmic Koala it doesn't seem to work no matter what I try.
I've upgraded to the new 10.04, and when I have the Alternate CD in, it shows on my desktop and I can browse all the contents of the CD, so I know it's registering. But when I try to install anything - via the terminal, .deb packages, or Synaptic Package Manager - it always asks me to insert the CD. Then I click OK and it says it's not mountable, even though I know it is, because it's on my desktop.
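Two hedged options that usually resolve the "insert the CD" loop - either re-register the disc with apt or stop apt from using it as a source at all; the paths are standard Ubuntu ones:
Code:
# re-register the Alternate CD as an apt source
sudo apt-cdrom add
# or: comment out the 'deb cdrom:' line so apt uses the network repositories instead
sudo sed -i 's/^deb cdrom:/# deb cdrom:/' /etc/apt/sources.list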
I have a cryptsetup container which, after freshly setting up the computer, isn't mountable anymore. Google didn't help me much. Since I use a keyfile, it can't be the passphrase. This is what I do:
Code:
losetup /dev/loop0 /home/data.img
cryptsetup -d super-secret-key.file data /dev/loop0
mount /dev/mapper/data /data
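If this is a plain (non-LUKS) mapping, a usual suspect after a reinstall is that the cryptsetup defaults (cipher, hash, key size) changed between versions, so the same keyfile now produces a different mapping. A sketch of specifying them explicitly - the values below are only placeholders; the correct ones are whatever the container was originally created with:
Code:
losetup /dev/loop0 /home/data.img
# plain dm-crypt mapping with explicit parameters (placeholder values - use the original ones)
cryptsetup -d super-secret-key.file -c aes-cbc-essiv:sha256 -s 256 -h ripemd160 create data /dev/loop0
mount /dev/mapper/data /data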
I have the ndiswrapper utility installed on my computer; however, I have been unable to install the corresponding driver from the provided CD. I have a WNA1100 wireless adapter. I have not been able to locate a mountable .inf file on the CD, only a "setupt.exe" and a trans.tbl file.
After I upgraded from Lenny to Squeeze, Thunar (in Xfce4) can't seem to detect when a CD is placed in the CD-ROM drive. When I put a USB stick in the USB port, Thunar adds an icon to the desktop, but with data CDs this isn't happening, and I must mount them from the terminal. However, CDs made from an ISO, for example my Debian install CD, are detected and appear on the desktop.
I booted up this morning and got a surprise. When I plug in a USB drive normally, it pops up in the KDE message box and I can choose to mount it; then it mounts, no problem.
When I turned on the computer today, however, I plugged in my USB drive and clicked to mount it, and lo and behold, I got a strange message: 'Could not mount the following device: USB 8GB'
I'm not entirely sure what this means. I can mount the drive manually through the terminal if it's running as root, but that's not a particularly practical way to do things, as I am a student and often have to switch USB drives and iPods many times a day.
I have a compact flash card that has some corruption. I would like to try to fix this by copying the data off the flash card using ddrescue or something, and then trying to fix the filesystem outside the flash.
BUT Fedora does not let me actually mount it or get a block device for the USB device, so I cannot use ddrescue to copy out the disk image.
Why is my Nikon camera not a block device, and why are the other "GUI applications" triggered on that device instead?
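If the card can be exposed as a plain block device (for example through a card reader rather than the camera's own USB interface), a GNU ddrescue sketch for imaging it might look like this - /dev/sdX is a placeholder:
Code:
# copy what is readable first, retry bad areas a few times, keep a log so the run can be resumed
ddrescue -d -r3 /dev/sdX cf-image.img cf-image.log
# expose the partitions inside the image and repair the copy, not the card
losetup /dev/loop0 cf-image.img
kpartx -a /dev/loop0
fsck.vfat -a /dev/mapper/loop0p1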
I am using Safesync from Trendmicro to back up my data. For some time now I have been experiencing serious problems. Normally my backup should look like this: But when I mount it with WebDAV, it looks different:
Code:
cdrewing@christian-desktop:~$ sudo mount -t davfs http://dav.trendmicro.safesync.com /media/Safesync
[sudo] password for cdrewing:
Please enter the username for the server http://dav.trendmicro.safesync.com; if you do not want to give one, press Return.
[Code]...
Note: when I mount Safesync with Nautilus, everything is functional and OK. But I need command-line access because I want to use rsync. A workaround could be to access the resource via the Nautilus mount. Where can I find this via the command line?
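On GNOME of that era, mounts made through Nautilus are exposed by gvfs under a FUSE directory in the home folder, which rsync can use as an ordinary path; a sketch, where the share directory name is only a guess at what gvfs will call it:
Code:
# list the gvfs FUSE mounts created by Nautilus
ls ~/.gvfs/
# use the mount as a normal path (directory name below is a placeholder)
rsync -av ~/Documents/ "$HOME/.gvfs/WebDAV on dav.trendmicro.safesync.com/backup/"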
I have attached an external USB disk to my Debian GNU/Linux system. The disk showed up as device /dev/sdc, and I prepared it like this: created a single partition with fdisk /dev/sdc (and some more commands in the interactive session that follows), then formatted the partition with mkfs.msdos /dev/sdc1. If I then attach the USB disk to a Windows XP or Vista system, no new drive becomes available. The disk and its partition show up fine in the disk management tool under "Computer Management", but apparently the file system in the partition is not recognized. How do I create a FAT32 file system which can actually be used in Windows?
Edit: I've given up on this and went with an NTFS file system created by Windows. In Debian Lenny this can be mounted read-write, but apparently it requires you to install the "ntfs-3g" package and explicitly pass the -t ntfs-3g option to the mount command.
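For completeness, a sketch of creating a FAT32 filesystem that Windows will recognize - it assumes the same /dev/sdc as above and that dosfstools is installed; the key points are the partition type and forcing FAT32:
Code:
# in fdisk, set the partition type to 'c' (W95 FAT32 LBA) so Windows recognizes it, then write the table
fdisk /dev/sdc
# format the partition, forcing FAT32 and giving it a label
mkfs.vfat -F 32 -n USBDISK /dev/sdc1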
I'm familiar with the software and hierarchy of the mount command but I can't find any info on why it is needed or preferred. What are the physical aspects of it? What is the burden of having files accessible all the time?
I'm totally unable to play or mount audio CDs and unable to use video DVDs on my box right now. When I insert an audio CD I get a dialog box saying:
Unable to mount disc: Location is not mountable
and if I check messages in the Log File Viewer I see the following text repeated (save for the last line) a couple of dozen times.
Code:
May 24 13:04:37 Amber kernel: [ 6680.434685] sr 0:0:0:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
May 24 13:04:37 Amber kernel: [ 6680.434691] sr 0:0:0:0: [sr0] Sense Key : Illegal Request [current]
May 24 13:04:37 Amber kernel: [ 6680.434695] sr 0:0:0:0: [sr0] Add. Sense: Illegal mode for this track
[code]...
On a similar but ultimately unrelated thread, someone was asked to enter a few commands, so I tried them and the following was the output:
Code:
main@Amber:~$ ls /dev/scd*
/dev/scd0
main@Amber:~$ sudo mount /dev/cdrom /media/cdrom0
[code]...
An error occurred: Could not open location; you might not have permission to open the file. As far as I can tell I have the relevant CSS libraries installed. I'm lost without being able to play CDs and DVDs on my system.