I tried setting up my own partition table, which apparently didn't go well. I have one CompactFlash disk for Linux and two hard drives for data, set up as RAID1. But the RAID drives don't get mounted. This is my first RAID setup.
me@server:~$ df -h
Filesystem Size Used Avail Use% Mounted on
I have 2 shiny new 1TB SATA drives. I have a running system already, a basic install on a single disk. I want to install both new drives, RAID1 them on a single partition, and mount them for storage. Seems simple, but all I can find are howtos on installing the whole system and booting from RAID.
I searched, and unfortunately all that did was raise my confusion level on this. "GRUB"? "mdadm"? "fake RAID"? Software RAID? Disk Utility? So many options, so little understanding! Relevant stuff (mostly reported by Disk Utility):
- Lucid Lynx.
- PATA host adaptor -> IDE controller -> Maxtor 164GB H/D [I *think* this has Windoze on it, but can't remember!]. There's also a CD & DVD drive on this IDE bus.
- PATA host adaptor -> SATA controller -> Seagate 500GB H/D - the Ubuntu boot drive, currently a 250GB ext4 partition, 3GB swap, and 250GB "unused".
- Peripheral devices -> FireWire 400 -> 2 x Samsung 500GB H/Ds. These were "stolen" from my MacBook and are currently an Apple RAID1 array. Everything they contained is safely backed up, and these can be considered as "new" drives awaiting formatting. [It's actually a Buffalo DriveStation Duo, but their site was even more confusing than here.]
Everything is working wonderfully, but I'd like to use the 2 FireWire drives in a RAID1 array. So, eventually, to the question: how do I tell Ubuntu to use these drives as a RAID array? It seems I can format and partition etc. from Disk Utility. Do I then use mdadm for configuration? Any other recommendations?
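Yes, that is the usual split: partition the drives (Disk Utility or fdisk), then let mdadm build and manage the array. A minimal sketch, assuming the two FireWire drives appear as /dev/sdb and /dev/sdc (the device names here are assumptions; check `sudo fdisk -l` first):

```shell
# mdadm is not installed by default on a desktop system
sudo apt-get install mdadm

# After creating one partition on each drive, build the mirror from them
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid

# Record the array so it is assembled automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

The initial sync will take a while on 1TB drives; `cat /proc/mdstat` shows its progress.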
My old workhorse computer's motherboard died yesterday, and I want to get everything off of its RAID1 array. I have a backup on an external drive but it's a few weeks old and I'd like to make sure I've got everything.
The old machine ran Slackware 12.1, and had a 2-drive IDE 250GB RAID1 array with partitions:
md0 - swap
md1 - /
The new machine has Slackware 13.1, and also has a 2-drive SATA 250GB RAID1 array with partitions:
md0 - /
md1 - swap
md2 - /home
I put the IDE drives from the old computer into the new one. I'm not sure how to get the old array going now. I'm not sure if I should use mdadm with --assemble (since the array was already set up before) or with --create (because the array needs to be renamed so it doesn't clash with the new computer's md1). I'm thinking I should use --create and give the old md1 a new name (md3). But I'm not sure if anything bad will happen if I use --create on an array with data on it.
The old drives are the last two entries, sdc and sdd. It's odd that the order is reversed. I had (non-RAID) Windows partitions on both drives and I've mounted them and verified that the drives are on the IDE cable in the right order, and sdc is the original 1st drive of the array, and sdd is the 2nd drive of the array.
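For what it's worth, --assemble is the safe choice here: --create rewrites the RAID superblocks, and while it can leave data intact when every parameter matches the original exactly, there is no need to take that risk just to avoid a name clash. You can assemble the old pair under any unused device name. A sketch, assuming the old root array lives on the second partitions, sdc2 and sdd2 (the partition numbers are an assumption; check `cat /proc/partitions`):

```shell
# Assemble the old array under a device name the new machine isn't using
sudo mdadm --assemble /dev/md3 /dev/sdc2 /dev/sdd2

# With old 0.90 metadata, you can also persist the new minor number:
#   sudo mdadm --assemble /dev/md3 --update=super-minor /dev/sdc2 /dev/sdd2

sudo mkdir -p /mnt/oldroot
sudo mount /dev/md3 /mnt/oldroot
```

The reversed sdc/sdd order doesn't matter to mdadm; it identifies members by their superblocks, not by cable position.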
I'm running a software RAID1 on 3 SATA drives. I'd like to make one a spare. How do I do this? Is there a way so that, once I do, the spare will automatically be activated and rebuilt as a mirror when a drive fails?
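If the third drive is currently an active member of the mirror, a sketch of demoting it to a hot spare (device names are assumptions). Once a device is a spare, mdadm rebuilds onto it automatically when an active member fails; no manual step is needed:

```shell
# Drop the third member out of the array (assuming it is sdc1)
sudo mdadm /dev/md0 --fail /dev/sdc1
sudo mdadm /dev/md0 --remove /dev/sdc1

# Shrink the mirror to two active devices
sudo mdadm --grow /dev/md0 --raid-devices=2

# Re-add the drive; with only two active slots, it joins as a spare
sudo mdadm /dev/md0 --add /dev/sdc1

# mdadm -D /dev/md0 should now list it with the "spare" role
```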
I'm not entirely a newbie, but this seems like such a simple question I'm not sure where else to ask it. I checked through the various HOWTOs and searched already and didn't find a clear answer, and I want to know for sure before we start investing in hardware. Is it possible to create a RAID1 (mirroring only) array with 3 live drives, rather than with 2 live plus a spare? Our goal is to have 3 drives in a hot-swap bay, and be able to pull and replace one drive periodically as a full backup. If I do:
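To the core question: yes, mdadm supports a mirror with three active members, and each one carries a complete copy of the data. A sketch (partition names are assumptions):

```shell
# Three-way RAID1: all three devices are live, full copies
sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
```

For the pull-and-replace workflow, mark the drive failed and remove it before pulling it (`mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1`), then `--add` the replacement; it will resync automatically.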
I bought an ICYBOX IB-NAS4220-B a while ago and kept getting issues with it (going down and not restarting, very slow, etc.). Two weeks ago one more issue arose and I couldn't restart or reconnect to the box, so I decided to take the disks out and recover my data to a LaCie 5big. The IcyBox uses software RAID1 and formats drives as ext3. It being a Linux system, I thought I could easily recover the data from an Ubuntu box, so I installed the latest version, as a CD boot wouldn't give me satisfactory results. I am now stuck with both 1TB drives plugged into my Ubuntu machine and can't seem to be able to mount them.
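Since the IcyBox wrote a standard Linux software RAID, the usual recovery route on Ubuntu is to let mdadm find and assemble it rather than mounting the raw disks. A sketch (the md device name that appears may differ on your machine):

```shell
sudo apt-get install mdadm

# Scan all drives for RAID superblocks and assemble whatever is found
sudo mdadm --assemble --scan

# See what came up
cat /proc/mdstat

# Mount the assembled array (ext3, per the IcyBox's format)
sudo mkdir -p /mnt/recovery
sudo mount -t ext3 /dev/md0 /mnt/recovery
```

If mounting /dev/md0 directly fails, some NAS firmwares put the filesystem on a partition or LVM volume inside the array; `sudo fdisk -l /dev/md0` and `sudo blkid` are worth a look before assuming the data is gone.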
I have just upgraded to Fedora 14 from an older version. I now have problems mounting my RAID1 array, which was operating correctly until now. This is a software RAID which was initially built under Fedora 10. The array is md0, and is made of 2 SATA drives (sdc and sdd) which have only one partition. The underlying filesystem is NTFS. The array is assembled correctly and active, as reported by /proc/mdstat and mdadm -D. When I try to mount the array, I get this:
[root@Goofy ~]# mount /dev/md0 /mnt/raid
mount: you must specify the filesystem type
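Since the underlying filesystem is NTFS, mount will not autodetect it on the md device; naming the type explicitly (via ntfs-3g, assuming that package is installed) usually resolves this error:

```shell
# Tell mount the filesystem type instead of letting it guess
sudo mount -t ntfs-3g /dev/md0 /mnt/raid

# If that still fails, check what is actually on the array:
#   blkid /dev/md0
```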
I'm new to CentOS and very new to RAID/HD setup. I have an old HP ProLiant G3 ML150, with no driver CD or anything else, only the server. I created a RAID1 array (named SYSTEM) with 2 HDs of 250GB from the controller and installed CentOS 5.2 (updated afterwards to 5.6). The installation is OK. Now I have added 2 HDs of 1TB each and created another RAID1 array (named DATI) from the controller. This RAID is to store data files. (And next I have to add another RAID1 for backup, but that is for next week.) How can I format it and add it to CentOS so I can use it?
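Because the mirror is done by the controller, CentOS just sees the DATI array as one new disk. A sketch of the usual steps, assuming it shows up as /dev/sdb; on some ProLiant controllers the device is named /dev/cciss/c0d1 instead, so check `fdisk -l` first:

```shell
# Partition the new array (create one partition, e.g. sdb1), then format it
sudo fdisk /dev/sdb
sudo mkfs.ext3 /dev/sdb1

# Mount it and make the mount permanent across reboots
sudo mkdir -p /dati
sudo mount /dev/sdb1 /dati
echo '/dev/sdb1  /dati  ext3  defaults  0 2' | sudo tee -a /etc/fstab
```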
I have trawled through an extensive number of posts on quite a few forums without even a step forward on this.
I have a Fedora 13 x64 system with software RAID1 for /boot and / (md0 and md1 respectively); swap is not RAIDed.
I was doing a yum update through the software updater in GNOME and the system froze.
I had to press reset to get any response from the machine.
Since then I have been getting the kernel panic above just after grub starts fedora.
I tried the previous kernel from the previous update and it has the same error.
At the worst I am prepared to load OS again but there is still some info and configs that I would like to access from the raid partitions before I go ahead. Is there any way to access these partitions through a live CD or rescue environment?
Is there a method to bring this install back to life? or am I looking at a reinstall?
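Yes; any live CD or rescue environment that ships mdadm (the Fedora install disc's rescue mode, for example) can assemble the arrays and mount them read-only so the configs can be copied off before any reinstall. A sketch:

```shell
# From the live/rescue environment:
sudo mdadm --assemble --scan      # assemble md0 and md1 from their superblocks
cat /proc/mdstat                  # confirm both arrays came up

# Mount read-only and pull your data and configs
sudo mkdir -p /mnt/sysimage
sudo mount -o ro /dev/md1 /mnt/sysimage
sudo mount -o ro /dev/md0 /mnt/sysimage/boot
```

With / mounted, it may also be worth chrooting in and reinstalling the kernel packages before giving up on the install entirely.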
/dev/md0 (made from sda1 and sdb1) - RAID1 /boot partition
/dev/md1 (made from sda2, sdb2, and sdc2) - RAID5 / partition
Earlier on I had some trouble with my sda drive: it dropped itself from both arrays, breaking the mirroring of the two partitions that make up /boot. I eventually got everything sorted out and back in sync. (I also have GRUB installed to the MBR on both sda and sdb.) Things are working fine on that front, but since then I've had this issue:
During boot up, I'll get an error message that it could not mount my /boot partition (when fstab is set to either /dev/md0 or the UUID). It claims c9ab814c-47ea-492d-a3be-1eaa88d53477 does not exist!
[mark@mark-box ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jan 20 16:34:41 2010
As far as I know, it isn't necessary for /boot to be mounted at all times, correct? Although, as I understand it, I do need to have it mounted whenever making kernel changes, correct?
I have recently set up an Ubuntu installation on an old PC. After some fiddling with both it and the Windows 7 machine, I have managed to share all of my drives. However, when attempting to access them from Ubuntu, only 2 of the 4 hard disk shares will mount, with the other 2 failing with an "Unable to mount location: failed to mount Windows share" error message.
Ubuntu 9.10. When I try to mount an internal drive, I receive the following message: "Error mounting: mount exited with exit code 1: helper failed with: Remounting is not supported at present. You have to umount volume and then mount it once again".
I like the 20-second boot from press of the power button with my new install, BUT I can't get drives mounted properly. I can mount them with Xfce's Thunar, but this way they still do not show up under Places or on my desktop. How do I figure out how to mount them properly?
The other one mounts fine. They are all separate physical drives. Another oddity is that it lists those two drives, which are SATA, as PATA, but I imagine that has something to do with my BIOS being on compatibility settings.
I have recently installed Ubuntu 10.10 on my machine as dual boot using Wubi, but on a separate partition to Windows. Loving it so far, but I cannot get any external drives to mount. I've tried pen drives, camera memory cards and hard drives, but nothing comes up.
I have just tried restarting with a pen drive plugged in, and it finally showed something in the Computer folder: a "memory stick drive" (and my internal CD drive, which I'm not sure was there before). But I still can't access it, and when I try to unmount it, it gives me this message:
Error detaching: helper exited with exit code 1: Detaching device /dev/sdc
USB device: /sys/devices/pci0000:00/0000:00:1d.7/usb1/1-1)
SYNCHRONIZE CACHE: FAILED: No such file or directory
(Continuing despite SYNCHRONIZE CACHE failure.)
STOP UNIT: FAILED: No such file or directory
I have an Acer Aspire 3500 laptop that I'm running 10.04 on, pretty much everything works OK, and I don't appear to have any hardware problems (I've checked using Gnome Device Manager). When I plug in a USB flash or hard drive, I don't get any drives/devices to mount, although in Gnome Device Manager the USB device appears as a USB Mass Storage Device.
Running tail -f /var/log/messages produces this:
Dec 10 19:44:31 darren-laptop kernel: [ 5800.632058] usb 1-3: new high speed USB device using ehci_hcd and address 4
Dec 10 19:44:31 darren-laptop kernel: [ 5800.765161] usb 1-3: configuration #1 chosen from 1 choice
After a bit of a rough install, I got 10.04 up and running on an Intel D845GRG motherboard. All seems to be working fine except for USB flash drives. My USB mouse and keyboard work fine, but the two sticks I have (Kingston and PQI) will not mount.
Can't mount external usb drives. There are no errors, they just don't show up anywhere.
Also, the Trash icon has disappeared from the bottom panel and is inaccessible from Nautilus ("Sorry, could not display all the contents of "trash": Operation not supported"), and the desktop icons default to 'Keep Aligned' every time I restart.
etc/fstab with a flash drive and an external HDD plugged in code...
I would like to have all my NTFS drives mount at startup. Here is the command I'm currently using: sudo mount -t ntfs-3g /dev/sdc1 /media/D -o force. I have made the folders D, E, F, etc. Now, I know that the command for starting, restarting and stopping Samba changed in 10.04; did something change with mounting NTFS drives as well?
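Nothing changed for NTFS mounting itself; the usual way to get mounts at startup is /etc/fstab rather than running mount commands by hand. A sketch using the device and mount point from the command above (the additional drives are assumed to be sdd1, sde1, etc.; verify with `sudo blkid`):

```shell
# /etc/fstab lines -- one per NTFS drive
/dev/sdc1  /media/D  ntfs-3g  defaults  0  0
/dev/sdd1  /media/E  ntfs-3g  defaults  0  0

# Test without rebooting:
#   sudo mount -a
```

Using the UUID=... form from blkid instead of /dev/sdX1 is more robust, since device letters can change between boots.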
I've been trying, unsuccessfully, to auto-mount my drives at startup. I've made a script that switches to root using "sudo -s" and then mounts the drives. The mount commands work properly when entered on the command line, but when I try running them from the executable, they don't work. What might I be missing?
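One likely culprit: `sudo -s` in a script opens an interactive root shell and waits there, so the mount lines after it never run as root. A sketch of a working version: put the mount commands directly in the script and run the whole script as root (via sudo, or from /etc/rc.local, which already runs as root at boot). Device names and mount points below are placeholders:

```shell
#!/bin/sh
# mount-drives.sh -- run as: sudo ./mount-drives.sh
# Adjust devices and mount points to your actual setup.
mount -t ntfs-3g /dev/sdb1 /media/data
mount -t ext4    /dev/sdc1 /media/backup
```

For boot-time mounting specifically, entries in /etc/fstab are simpler and need no script at all.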
I've got a 10.04 server install, on which I installed a basic GNOME desktop. But I've never been able to automount USB drives or DVDs/CDs, though this seems to work on the standard desktop edition. Could this relate to not having the standard GNOME install? I don't have users-admin to try that, and I don't see an install package for it.
I installed 10.10 on my workstation, but now my system refuses to mount two existing data drives that were already there... sudo mount /dev/sdc /mnt/data-b gives me: mount: unknown filesystem type 'isw_raid_member'
I didn't change any BIOS settings... My BIOS is not configured for RAID at all, that setting reads AHCI, which should be okay for my kernel (using the stock 2.6.35-24).
I tried to force mount one of the drives with sudo mount -t ext3 /dev/sdc /mnt/data-b
but this gives me an even stranger message: "/dev/sdc already mounted or /mnt/data-b busy" (neither of which is true...).
It's mainly the "isw_raid_member" thing that troubles me... I didn't and don't have a RAID system at all..
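"isw_raid_member" means the drive still carries leftover Intel Matrix Storage ("fakeRAID") metadata from some earlier life, and 10.10's mount tooling now recognizes and refuses it, even though the BIOS is set to AHCI. If you are certain the drives are plain single disks, the stale metadata can be erased with dmraid; a sketch (this permanently destroys the RAID signature, so double-check the device name and have backups first):

```shell
# List the offending RAID signatures
sudo dmraid -r

# Erase the stale Intel metadata from the drive (irreversible; confirms first)
sudo dmraid -r -E /dev/sdc

# Afterwards, the normal mount should work again
sudo mount -t ext3 /dev/sdc /mnt/data-b
```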