OpenSUSE Hardware :: Running A Software RAID1 On 3xSATA Drives?
Oct 22, 2010
I'm running a software RAID1 on 3x SATA drives. I'd like to make one a spare. How do I do this? And is there a way to set it up so that, when a drive fails, the spare is automatically pulled in and rebuilt as a mirror?
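The usual way to demote one of three active mirrors to a hot spare with mdadm is to drop the array's active-device count and re-add the disk; a sketch, assuming the array is /dev/md0 and the member to demote is /dev/sdc1 (both names hypothetical):
Code:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1  # take the third mirror leg out
mdadm --grow /dev/md0 --raid-devices=2              # array now wants only 2 active members
mdadm /dev/md0 --add /dev/sdc1                      # comes back as a hot spare
cat /proc/mdstat                                    # expect [2/2] plus one (S) device
Once a device is attached as a spare, md rebuilds onto it automatically when an active member fails; mounting isn't involved, since the filesystem lives on /dev/md0 either way.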
I tried setting up my own partition table, which apparently didn't go well. I have one CompactFlash disk for Linux and two hard drives for data, which are set up as RAID1, but the RAID drives don't get mounted. This is my first RAID setup.
Code:
me@server:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
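If cat /proc/mdstat shows no assembled array, the mirror probably just isn't being assembled or listed in fstab; a minimal sketch, assuming the array is /dev/md0, the filesystem is ext3, and the mount point is /mnt/data (all assumptions):
Code:
mdadm --assemble --scan                     # assemble arrays found in superblocks/mdadm.conf
mkdir -p /mnt/data
mount /dev/md0 /mnt/data                    # manual test mount
mdadm --detail --scan >> /etc/mdadm.conf    # persist the array definition
echo '/dev/md0 /mnt/data ext3 defaults 0 2' >> /etc/fstab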
I have 2 shiny new 1TB SATA drives. I have a running system already, a basic install on a single disk. I want to install both new drives, RAID1 them on a single partition, and mount them for storage. Seems simple, but all I can find are howtos on installing the whole system and booting from RAID.
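For a data-only mirror on an already-running system, no installer howto is needed; a sketch, assuming the new disks arrive as /dev/sdb and /dev/sdc and each gets one full-size partition of type fd (Linux raid autodetect) - all names hypothetical:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage
mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/md0 /srv/storage ext4 defaults 0 2' >> /etc/fstab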
My old workhorse computer's motherboard died yesterday, and I want to get everything off of its RAID1 array. I have a backup on an external drive but it's a few weeks old and I'd like to make sure I've got everything.
The old machine ran Slackware 12.1, and had a 2-drive IDE 250GB RAID1 array with partitions:
md0 - swap
md1 - /
The new machine has Slackware 13.1, and also has a 2-drive SATA 250GB RAID1 array with partitions:
md0 - /
md1 - swap
md2 - /home
I put the IDE drives from the old computer into the new one. I'm not sure how to get the old array going now. I'm not sure if I should use mdadm with --assemble (since the array was already set up before) or with --create (because the array needs to be renamed so it doesn't clash with the new computer's md1). I'm thinking I should use --create and give the old md1 a new name (md3). But I'm not sure if anything bad will happen if I use --create on an array with data on it.
The old drives are the last two entries, sdc and sdd. It's odd that the order is reversed. I had (non-RAID) Windows partitions on both drives and I've mounted them and verified that the drives are on the IDE cable in the right order, and sdc is the original 1st drive of the array, and sdd is the 2nd drive of the array.
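For pulling data off a foreign array, --assemble is the non-destructive choice: it reads the existing superblocks, and the md device name you assemble under is free to differ from the original, so there is no clash with the new machine's md1. --create, by contrast, writes fresh superblocks and can clobber data. A sketch, assuming the old root mirror is on /dev/sdc2 and /dev/sdd2 (verify with --examine first):
Code:
mdadm --examine /dev/sdc2 /dev/sdd2             # confirm both are members of the same old array
mdadm --assemble /dev/md3 /dev/sdc2 /dev/sdd2   # assemble under a new, non-clashing name
mkdir -p /mnt/oldroot
mount -o ro /dev/md3 /mnt/oldroot               # read-only while copying files off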
I searched, and unfortunately all that did was raise my confusion level. "GRUB"? "mdadm"? "fakeraid"? Software RAID? Disk Utility? So many options, so little understanding! Relevant stuff (mostly reported by Disk Utility):
- Lucid Lynx.
- PATA host adaptor -> IDE controller -> Maxtor 164GB H/D [I *think* this has Windoze on it, but can't remember!]. There's also a CD and DVD drive on this IDE bus.
- PATA host adaptor -> SATA controller -> Seagate 500GB H/D - the Ubuntu boot drive, currently a 250GB ext4 partition, 3GB swap, and 250GB unused.
- Peripheral devices -> Firewire 400 -> 2 x Samsung 500GB H/Ds. These were "stolen" from my MacBook and are currently a RAID1 Apple array. Everything they contained is safely backed up, and they can be considered "new" drives awaiting formatting. [It's actually a Buffalo DriveStation Duo, but their site was even more confusing than here.]
Everything is working wonderfully, but I'd like to use the two FireWire drives in a RAID1 array. So, eventually, to the question: how do I tell Ubuntu to use these drives as a RAID array? It seems I can format and partition etc. from Disk Utility. Do I then use mdadm for configuration? Any other recommendations?
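Yes: partition the two FireWire disks with Disk Utility (or fdisk), then let mdadm build and record the array; a sketch, assuming the disks appear as /dev/sdb and /dev/sdc (hypothetical names):
Code:
sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf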
I'm not entirely a newbie, but this seems like such a simple question I'm not sure where else to ask it. I checked through the various HOWTOs and searched already and didn't find a clear answer, and I want to know for sure before we start investing in hardware. Is it possible to create a RAID1 (mirroring only) array with 3 live drives, rather than with 2 live plus a spare? Our goal is to have 3 drives in a hot-swap bay, and to be able to pull and replace one drive periodically as a full backup. If I do:
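Presumably something like the following, which md does support - RAID1 takes any number of live members, each holding a full copy of the data; a sketch with hypothetical device names:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# before pulling a drive for offsite rotation, retire it cleanly:
mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
# the replacement resyncs automatically once added:
mdadm /dev/md0 --add /dev/sde1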
I have a NAS/server running Slackware64-13.37 with two RAID arrays, one with 3x500GB disks and the other with 2x2TB disks. I also have one 80GB HDD which contains the OS. I found a spare 250GB HDD in my closet and want to set up a RAID1 array from the OS HDD and that spare drive. I know that I can't make use of the extra 170GB of space, but I don't care, as long as I have a working installation on the other disk if that 80GB HDD breaks. How can I set up that kind of RAID1 without losing any data?
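The usual in-place trick is to build the mirror degraded on the 250GB disk, copy the OS across, then add the 80GB partition as the second leg. A rough sketch only - back up first, and all device names are hypothetical:
Code:
mdadm --create /dev/md4 --level=1 --raid-devices=2 missing /dev/sdf1   # one-legged mirror on the 250GB disk
mkfs.ext4 /dev/md4
mount /dev/md4 /mnt/new
cp -ax / /mnt/new                # copy the running system over
# fix /etc/fstab and the bootloader on the new disk, boot from it, then:
mdadm /dev/md4 --add /dev/sda1   # old 80GB OS partition joins and resyncs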
I cleverly built a machine with two drives mirrored, boot partition on the mirror. It crashed at some point when I was not present, and was apparently writing to disk at the time, because ever since, it will not boot. I get error messages to the effect that there is no boot partition. At this point I am not particularly concerned with rebuilding this array, though I have read a hell of a lot on the boards about trying that (and am daunted - everyone assumes you can boot the machine with the bad array). Instead, I am wondering: is it possible to build a simple machine on a third disk, then "look" at the disks in the former (?) array, and potentially rebuild it?
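That approach works: from a fresh install on the third disk, the old members can be inspected and assembled without writing to them; a sketch with hypothetical names:
Code:
mdadm --examine /dev/sdb1 /dev/sdc1                       # read the RAID superblocks
mdadm --assemble --readonly /dev/md9 /dev/sdb1 /dev/sdc1  # assemble without touching the disks
mount -o ro /dev/md9 /mnt/rescue                          # copy data off before attempting repair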
I'm failing to install openSUSE 11.3 with RAID1. What I do: Partitioning -> Expert. sda1 type RAID, do not format/mount; sda2 /(root) type RAID, do not format/mount; sda3 /home type RAID, do not format/mount. Clone disk to sdb. RAID -> add md126p1 type swap, mount to swap; add md126p2 type ext4, mount to /(root); md126p3 type ext4, mount to /home. Bootloader: GRUB, boot from MBR enabled, boot from / disabled. After installation the system does not boot, and GRUB reports an error that the specified filesystem cannot be found.
openSUSE 11.2 64-bit, installed in a RAID1 software array. Everything went smoothly during installation: the system boots, the mdX partitions are in their places, I did the upgrades, configured things the way I wanted, booted several times... all OK. Then I wanted to test the RAID, and since it's RAID1, I just disconnected one drive and started the computer. All I got is
Disconnecting the other drive instead, I got
Code:
GRUB Loading stage1.5.
GRUB loading, please wait...
(hd1,0)/message: file not found
and a simple boot menu (in character mode) shows up with the two options SUSE LINUX and Failsafe -- SUSE LINUX (no version number).
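A likely explanation for both symptoms is that legacy GRUB was only fully installed on one disk, so whichever drive remains must be set up to boot on its own. The classic recipe (a sketch - device and partition names are assumptions) runs setup against each disk while mapping it to hd0:
Code:
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
After that, either disk should boot alone, since each MBR points at its own copy of /boot.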
I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.
openSUSE 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x IDE 500GB disks: /dev/md0 as a 250MB partition across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB partition across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that had drives failing to show up at boot. Somehow, /dev/sdd2 ended up configured in array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.
The obvious step of failing the partition, removing it, then adding (or re-adding) it just brings it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. I thought this might be the trick - after the zero, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.
So 'mdadm --add /dev/md1 /dev/sdd2', and it still comes back as a spare. Here is mdadm -D /dev/md1:
Code:
/dev/md1:
        Version : 1.0
  Creation Time : Sat Jul 9 10:26:01 2011
     Raid Level : raid1
     Array Size : 488119160 (465.51 GiB 499.83 GB)
....
I can't stop this array, the OS is running from there. I can't easily boot from CD to repair, all IDE ports have disks attached.
Does anyone have an incantation to promote a spare to active?
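One hedged guess: if the array's metadata still says only one active device is wanted, an added disk will sit as a spare forever; raising the active-device count usually kicks off the rebuild:
Code:
mdadm --grow /dev/md1 --raid-devices=2   # ask for 2 active members; the spare should start rebuilding
cat /proc/mdstat                         # watch for a recovery progress bar
mdadm -D /dev/md1                        # 'spare rebuilding' should become 'active sync'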
I recently finished installing Fedora 9 on a ProLiant ML330 G6 server, but I configured the SATA hard drives to be viewed as four separate hard drives. I've been asked to merge the drives so they are seen as one 800GB drive. My biggest fear: we had set up Samba to share folders, giving specific users access to specific files saved on the ProLiant server - will those settings be lost? And could you call that a file server, or do you have to enter any more settings? Also, could anyone point me to a tutorial on Logical Volume Management and RAID specifically for Fedora 9?
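Merging data disks into one big volume is exactly what LVM does; a sketch, assuming the data disks carry partitions /dev/sdb1 through /dev/sde1 (hypothetical names, and this wipes them, so move the shared files off first):
Code:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
vgcreate datavg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate -l 100%FREE -n datalv datavg
mkfs.ext3 /dev/datavg/datalv
mount /dev/datavg/datalv /srv/share
As for Samba: the share definitions live in /etc/samba/smb.conf, so they survive as long as the share paths point at the same directories after the move.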
Upgraded from Karmic through Update Manager, and none of my external drives, CD drive, or flash drives are picked up. Had to go back to Karmic and will remain there for a while.
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: fail one of the disks out of the RAID-1, then image it to a file saved on an external disk using the dd command (if memory serves, something like "sudo dd if=/dev/sda of=backupfilename.img"). Then re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use dd to dump the image back onto an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
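As a sketch of that cycle (device names hypothetical; imaging the whole disk rather than one partition keeps the MBR and bootloader):
Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1      # drop one mirror leg
dd if=/dev/sdb of=/mnt/external/os-snapshot.img bs=1M   # image the entire disk
mdadm /dev/md0 --add /dev/sdb1                          # re-add; md resyncs it
One caveat worth noting: while the leg is out, the array runs unprotected, so the window should be kept short.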
So I set up a RAID 10 system and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
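Roughly, yes: in md's default near-2 layout the four active drives form two mirrored stripe halves, while a spare holds no data at all until a member fails and it gets rebuilt onto. A sketch of a 4+1 setup (device names hypothetical):
Code:
mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1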
I am building a home server that will host a multitude of files, from MP3s to ebooks to FEA software and files. I don't know if RAID is the right thing for me. This server will have all the files that I have accumulated over the years, and if the drive fails then I will be S.O.L. I have seen discussions where someone has a RAID1 setup but doesn't keep the drives internal to the case; they bought two separate external hard drives with eSATA to minimize the chance of an electrical failure taking out both drives. (I guess this is a good idea.) I have also read about having one drive and using a second to rsync the data every week. I planned on purchasing two enterprise hard drives of 500GB to 1TB, but I don't have any experience with how I should handle my data.
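The rsync route is simple to automate; a sketch of a weekly job (paths hypothetical):
Code:
# /etc/cron.d/weekly-backup -- Sundays at 03:00
0 3 * * 0 root rsync -a --delete /srv/files/ /mnt/backupdrive/files/
The trade-off versus RAID1: rsync gives a point-in-time copy, so an accidental deletion can still be recovered until the next run (after which --delete propagates it), while RAID1 mirrors instantly, which protects against drive failure but not against mistakes.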
I recently had issues with the latest version of the Linux kernel, and I got that fixed, but ever since then none of my drives will mount, and they aren't even recognized.
I suspect this is not new, but I just can't find where it was covered. Maybe someone can give me a good lead. I just want to prevent certain users from accessing the CD/DVD drives and all external drives. They should be able to mount their home directories and move around within the OS, but they shouldn't be able to move data off the PC. Any clues?
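On Debian/Ubuntu-style systems of this era, the desktop's ability to mount removable media is tied to group membership, so one hedged approach is simply to drop the restricted users from those groups (group names vary by distro, and newer releases use PolicyKit instead):
Code:
gpasswd -d alice plugdev   # removable/external drives ('alice' is a hypothetical user)
gpasswd -d alice cdrom     # CD/DVD drives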
I have recently set up and installed Ubuntu 9.04 on a virtual drive using VMware 6.04, and installed the desktop GUI as well. I need to add other drives for data and logging, which I did on the VMware side. I can see the two drives in Ubuntu, but I cannot access them; I get the "unable to mount location" error when I try. How can I resolve this? I need these two virtual drives to be used as data drives.
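A fresh virtual disk has no partition table or filesystem, which is why the desktop cannot mount it; a sketch, assuming the new disks are /dev/sdb and /dev/sdc (hypothetical):
Code:
sudo fdisk /dev/sdb       # create one primary partition (n, then w); repeat for /dev/sdc
sudo mkfs.ext3 /dev/sdb1
sudo mkdir -p /data
sudo mount /dev/sdb1 /data
echo '/dev/sdb1 /data ext3 defaults 0 2' | sudo tee -a /etc/fstab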
I've used it once before, but got fed up with the boot menu asking me every time I turned my laptop on, because I wasn't using it enough. I have Windows 7 on drive C and I want to keep it on drive C. I have several 1.5TB+ drives, and one of them is not being used. I want to dedicate it to Ubuntu and be able to dual-boot with my Windows 7 install. Is this possible? And if it is, what about when this drive is not connected to my laptop? Will that mess up the boot process?
So, at the moment I have a 7TB LVM with one volume group and one logical volume. In all honesty I don't back up this information. It is filled with data that I can "afford" to lose, but... would rather not. How do LVMs fail? If I lose a 1.5TB drive that is part of the LVM, does that mean at most I could lose 1.5TB of data? Or can files span more than one drive? If so, would it be just one file that spans two drives, or could there be many files that span multiple drives? Essentially, I'm just curious, in a general, high-level sense, about LVM safety. What are the risks involved?
Edit: what happens if I boot up the computer with a drive missing from the LVM? Is there a first primary drive?
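Files can span physical drives, because the filesystem sits on the logical volume, not on any one disk; with a plain linear LV, losing one PV punches holes in the middle of the filesystem, so the damage is not neatly confined to 1.5TB worth of files. To see how allocation actually falls across the drives:
Code:
sudo pvs                  # each physical volume and how full it is
sudo lvs -o +devices      # which physical volumes each logical volume spans
And on the edit: with a PV missing, the volume group will refuse to activate normally at boot until it is restored (or forced partial); there is no special "first primary drive".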
I have a few hard drives that I connect to my system with a USB-to-IDE cable. Some of the drives mount right away, but others don't; an example is below.
Code:
Oct 24 11:10:04 linux-b21t kernel: usb 1-3: new high speed USB device using ehci_hcd and address 14
Oct 24 11:10:04 linux-b21t kernel: usb 1-3: configuration #1 chosen from 1 choice
Oct 24 11:10:04 linux-b21t kernel: scsi15 : SCSI emulation for USB Mass Storage devices
Oct 24 11:10:04 linux-b21t kernel: usb-storage: device found at 14
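The log shows the USB bridge enumerating, but no "Attached SCSI disk"/sdX lines follow, so there is nothing for the automounter to mount; a sketch of the next checks (device name hypothetical):
Code:
dmesg | tail -n 20     # after plugging in: does an sdX device ever attach?
fdisk -l /dev/sdb      # does the kernel see a partition table on it?
mount /dev/sdb1 /mnt   # manual mount, bypassing the desktop automounter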
I have seven drives in the Dolphin file browser's left bar. I don't need them; all of them are system mounted. What I would like is just to have a shortcut or bookmark for one of the drives. How can I do this?
I just migrated from Ubuntu to openSUSE 11.1 on my desktop, and wasn't I disappointed by the automatic mounting of pen drives/CD-ROMs - I really can't get it working.
I just installed openSUSE 11.2, but I can't use my two optical disk drives /dev/sr0 and /dev/sr1 because the operating system can't find them. These drives aren't listed in my fstab file:
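If manual control is acceptable, classic fstab entries for optical drives look like the following (mount points hypothetical; create them first with mkdir):
Code:
/dev/sr0  /media/dvd0  udf,iso9660  ro,noauto,user  0  0
/dev/sr1  /media/dvd1  udf,iso9660  ro,noauto,user  0  0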
When my husband and I installed openSUSE 11.2, we made the mistake of telling it to have my other two hard drives owned by root. So now, whenever I want to open my other two hard drives, I have to type in the root password. How can I change this?
openSUSE 11.2 64-bit. When I select a hard drive in the Dolphin file manager, it asks for the root password. I would like easier access to the drives. The YaST partitioner lists all of the drives and has a dialog box to change this, i.e. "user can mount the drive". Can we change this on the fly, while the system is running? The fstab file is not listing all of the drives, so I cannot just edit the config there.
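For both of these posts the underlying knobs are ordinary Unix ones: the users mount option in /etc/fstab (which is what the YaST "user can mount" checkbox writes for you), and the ownership of the mounted filesystem. A sketch, with hypothetical names:
Code:
# fstab line letting ordinary users mount the drive without the root password:
/dev/sdb1  /data  ext4  users,noauto  0  0
# and once mounted, hand the files to the regular user:
chown -R jane:users /data    # 'jane' is a hypothetical user
Both take effect immediately, with no reboot; an already-mounted drive just needs a umount/mount for new fstab options to apply.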