Installation :: Hardware RAID 0 Still Show Up As 2 Drives / Cause Of It?
Jul 29, 2010

My two 400GB drives, which were added to a single 800GB array (via the FastBuild BIOS utility), still show up as two 400GB drives in fdisk. Why is that?
I have created a RAID array on one of my servers. The RAID health is OK, but it shows a warning. What could be the problem?

WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
OK, I obviously did something wrong, but I don't think I did, because I checked my kickstart file after installing.
When I do a df -h, the drives I set to 4096MB and 6144MB when I installed the server show up as 3.9GB and 5.9GB respectively. This is my first Linux install; doesn't it do the # x 1024 for partition sizes?
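For what it's worth, the numbers do line up once you account for units: installer sizes are in MiB (powers of 1024), while df reports the finished filesystem's size, which is a bit smaller than the partition, and rounds down. A quick sanity check (the ~2% overhead figure is illustrative, not an exact ext3 number):

```python
# Installer partition sizes are in MiB; the "G" in df -h output is GiB.
def mib_to_gib(mib):
    """Convert mebibytes to gibibytes (divide by 1024)."""
    return mib / 1024

print(mib_to_gib(4096))  # the 4096MB partition is exactly 4.0 GiB
print(mib_to_gib(6144))  # the 6144MB partition is exactly 6.0 GiB

# df shows the *filesystem* size, which loses a little space to
# superblocks, inode tables, etc. A couple of percent of overhead
# plus df's rounding-down easily turns 4.0 into "3.9G".
overhead = 0.02  # illustrative assumption
print(round(mib_to_gib(4096) * (1 - overhead), 1))
```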
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up, install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems after installation, however, the systems broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live cd gives the error:
fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb
and running fsck from 9.04 installed on the other hard drive gives an error like:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with different drives in different boxes.
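If the problem really is leftover fakeraid metadata (the `isw_raid_member` in the fsck error points that way), one common fix from the live CD is to erase the stale Intel Software RAID signature from each former member. This is only a sketch: /dev/sdb is an assumption, and erasing metadata is destructive to any surviving RAID set, so check the device name twice first.

```shell
# Show any fakeraid signatures dmraid can see (isw = Intel Software RAID)
sudo dmraid -r

# List all signatures on the disk without touching anything
sudo wipefs /dev/sdb

# Erase the stale isw metadata from the former array member
# (-r selects RAID metadata, -E erases it; device name is an assumption!)
sudo dmraid -rE /dev/sdb
```

After that, the kernel should stop treating the disk as a RAID member and fsck/mount should work on the plain partitions again.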
I have recently installed an ASUS M4A77TD Pro system board which supports RAID.
I have 2 x 320GB SATA drives I would like to set up RAID-1 on. So far I have configured the BIOS for RAID-1 on the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I just have 2 drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to do this within Ubuntu.
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
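The outline above can be scripted roughly like this. A sketch only, not a tested procedure: /dev/md0, /dev/sda1 and the backup path are assumptions, and you'd want services quiesced (or the filesystem synced) before imaging so the snapshot is consistent.

```shell
# 1. Fail and remove one mirror half from the array
sudo mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

# 2. Image the whole disk to a file on the external drive
#    (bs=1M just makes dd faster than the tiny default block size)
sudo dd if=/dev/sda of=/mnt/external/os-backup.img bs=1M

# 3. Re-add the disk; md resyncs it from the surviving mirror
sudo mdadm /dev/md0 --add /dev/sda1

# Watch the rebuild progress
cat /proc/mdstat
```

One caveat worth noting: a disk failed out mid-operation gives you a crash-consistent image at best, so imaging during a quiet period (or from a read-only state) is safer.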
So I set up a RAID 10 system and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
root@wolfden:~# cat /proc/mdstat
Personalities : [raid0] [raid10]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
[code]....
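As I understand it (hedged, since the rest of the mdstat output is cut off): with 4 active drives in md's raid10 with the default near-2 layout, data is striped across two mirrored pairs, so all four actives hold live data. A spare holds nothing at all; md only pulls it in to rebuild when an active drive fails. The difference shows up at creation time, e.g. (device names assumed):

```shell
# 4 active devices, striped + mirrored, plus 1 idle hot spare
sudo mdadm --create /dev/md1 --level=10 --layout=n2 \
    --raid-devices=4 /dev/sd[abcd]2 \
    --spare-devices=1 /dev/sde2

# mdadm --detail labels each device "active sync" or "spare"
sudo mdadm --detail /dev/md1
```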
I can't finish installing Ubuntu 10.04. I have a Corsair Nova 128 GB solid state drive (the 128 GB model of this series, to be precise) and an ASUS P5N-T Deluxe motherboard. It has a six-port SATA 3 Gb/s NVIDIA MediaShield RAID controller on it. In Disk Utility it shows up as an nVidia Corporation MCP55 SATA Controller, using the sata_nv driver. I have the 128 GB drive plugged into one of these ports, and I have disabled RAID completely in the BIOS.
When I boot the Live CD and run GParted, the drive shows up fine. I've got a 70.81 GiB NTFS partition (Windows 7), a 4.00 GiB swap partition, and a 44.43 GiB ext4 partition, onto which I had planned to install Ubuntu 10.04. I created the swap and ext4 partitions earlier, but at one point, that was unpartitioned space.
The problem is that when I run the Install app to install Ubuntu 10.04 on my SSD, when I get to step 4 of 8 (Prepare partitions), exactly zero drives show up. A while ago, I had a couple of Seagate 1.5 TB mirrored drives plugged into two of the ports, and they showed up fine, but no 128 GB SSD. In trying to troubleshoot this problem, I unplugged those and just left the SSD, and zero drives showed up. I disabled RAID in the BIOS, and it still doesn't show up. I moved the SSD from the port it was plugged into and plugged it into one of the ports that one of the 1.5 TB drives was plugged into. It still doesn't show up.
Nuts and bolts of it: Seagate 1.5 TB drive plugged into port: no problem. Corsair Nova 128 GB SSD plugged into port: doesn't show up. But again, only in the Install utility. In Disk Utility under SATA Host Adapter/MCP55 SATA Controller, I see 128 GB Solid-State Disk/ATA Corsair CSSD-V128GB2 listed plain as day. In GParted, I see a 119.24 GiB /dev/sda device with the partitions I've created on it plain as day. I can pull up a terminal and mount /dev/sda3 /mnt/ssd without any problem and access files on it. But for whatever reasons, the Install program and only the Install program can't see it. I'm dead in the water. Obviously, I can't use Ubuntu if I can't install it. What can I do to get this disk drive detected?
[Code]....
How do I migrate my whole server to larger hard drives? (I.e., I'd like to replace my four 1TB drives with four 2TB drives, for a new total of 4TB usable instead of 2TB.) I'll post the output from everything relevant that I can think of in code tags below.
I'd like to end up with much larger /home and /public partitions. When I first set up raid and then LVM it seemed like it wouldn't be too hard once this day arrived, but I've had little luck finding help online for upgrades and resizing versus simply rebuilding from a failure. Specifically, I figure I have to mirror the data over to the new drives one at a time, but I can't figure out how to build the raid partitions on the new disks in order to have more space (yet mirror with the old drive that has a smaller partition)... don't the raid partitions have to be the same size to mirror?
Ubuntu Server (karmic) 2.6.31-22-server #65-Ubuntu SMP; fully updated
Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 sdd5[1] sdc5[0]
968952320 blocks [2/2] [UU]
[Code].....
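The partitions don't have to match: md sizes a RAID-1 at its smallest member, so you can mirror a larger partition with a smaller one and reclaim the space once both members are big. A rough sequence for one md5-backed PV; the VG/LV names (vg0, /dev/vg0/home) are made up for illustration since the LVM output is elided above:

```shell
# For each disk in turn: fail out the old drive, swap in the 2TB one,
# create a *larger* RAID partition on it, and re-add it:
sudo mdadm /dev/md5 --fail /dev/sdc5 --remove /dev/sdc5
# ...physically replace the disk, partition the new (bigger) sdc5, then:
sudo mdadm /dev/md5 --add /dev/sdc5
cat /proc/mdstat            # wait for the resync, then repeat for sdd5

# Once BOTH members are big, grow the array to the new partition size
sudo mdadm --grow /dev/md5 --size=max

# Then push the new space up through LVM and into the filesystem
sudo pvresize /dev/md5
sudo lvextend -l +100%FREE /dev/vg0/home
sudo resize2fs /dev/vg0/home   # ext3/ext4 can grow while mounted
```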
So Windows wouldn't recognize my drives as a RAID setup, so I disabled it and switched to IDE; now the Ubuntu 9.10 installation will only recognize my drives as RAID. I have an ASUS M3A32-MVP Deluxe motherboard; it has 4 SATA connectors and 2 Marvell-controlled SATA connectors. On the 4 SATA connectors I have my 2 WD 500GB HDs, my DVD burner, and my external USB/eSATA slots. On the Marvell-controlled SATA connector I have a WD 160GB HD. Originally, when I built the computer, I wanted a RAID setup with the 2 500GB HDs.
But Windows wouldn't recognize the RAID setup and wouldn't boot properly. So I said screw it, removed the RAID, and set all the drives to IDE. Then, when I tried to install Ubuntu 9.04, it would only recognize my 2 500GB HDs as RAID. GParted recognizes the drives as both RAID and IDE. Eventually, after a day or two of praying and messing around, the installer recognized both drives as RAID and IDE. A couple months later, here I am trying to install Ultimate Edition 1.4.
I just installed Debian 5.0.4 successfully. I want to use the PC as a file server with two drives configured as a RAID 1 device. Everything with the RAID device works fine; the only question I have concerns the GRUB 0.97 bootloader. I would like to be able to boot my server even if one of the disks fails or the filesystem containing the OS becomes corrupt, so I configured only the data partitions as a RAID 1 device, so that the second disk holds a copy of the last stable installation, similar to this guide: [URL]...
[Code]...
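For surviving a disk failure with GRUB 0.97, the usual trick is to install stage1 into the MBR of both disks, so the BIOS can boot from whichever disk is left. A sketch from the grub shell; hd0/hd1 and the (hdX,0) boot partitions are assumptions about the layout:

```shell
sudo grub
# Then, inside the grub shell:
#   root (hd0,0)      # partition holding /boot on the first disk
#   setup (hd0)       # install stage1 into the first disk's MBR
#   root (hd1,0)      # the /boot copy on the second disk
#   setup (hd1)       # install stage1 into the second disk's MBR
#   quit
```

With that in place, if the first disk dies, the BIOS falls through to the second disk, whose MBR knows how to find its own copy of /boot.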
My system: Intel Core Duo E5300, Gigabyte G31M-ES2L mobo, 1GB DDR2 800MHz RAM, 4 x WD20EARS HDDs. I have been trying to install Fedora or Ubuntu for over a week. I thought it would take an hour and I would be away. I have been trying to install using the mdadm software RAID feature. Every time, it takes about a day to format the drives and then I get an installation error. The drives state they are ready to use as-is on any operating system other than WinXP, but this does not appear to be the case.
I am very new to Fedora... I have been doing some reading. [URL]... That information has been promising. I have been able to get into fdisk off the live CD, but I can't figure out how, or whether it is possible, to do what I want.
Has anyone had any luck getting these drives to function correctly in a software RAID? I have had good luck with WD drives in the past and just assumed these drives would do what i wanted to but alas i have been proven wrong.
The partitions I wanted were:
- A 2GB swap partition
- A 10GB RAID 1 partition for Fedora
- The remaining space as a RAID 5 for files
Am I just banging my head against a wall here, or is this possible?
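It should be possible, and the day-long formats are a classic symptom with the WD20EARS: it's a 4K-sector ("Advanced Format") drive, and misaligned partitions make writes painfully slow. Aligning partitions to 1MiB boundaries usually fixes it. A sketch of the layout described above on one disk with parted (device names assumed; repeat per disk, and the fs-type arguments are only hints):

```shell
# -a optimal keeps partition starts aligned for 4K-sector drives
sudo parted -s -a optimal /dev/sda -- \
    mklabel gpt \
    mkpart swap linux-swap 1MiB 2GiB \
    mkpart fedora ext4 2GiB 12GiB \
    mkpart files ext4 12GiB 100% \
    set 2 raid on \
    set 3 raid on

# Then build the arrays across the four disks:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]2
sudo mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]3
```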
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between two hard drives for redundancy (I know it's not hardware RAID, but the budget is tight) with the following table:
[Code]....
I set these up using the RAID button in the partitioning section of the installer, except for swap, which I set up using the New Partition button, creating one swap partition on each hard drive that didn't take part in the RAID. Almost every time I tried this install, it halted with an error on one of the RAID partitions and exited the installer. I actually managed it once in about 10-15 tries, but then I broke it. After getting very frustrated, I decided to build it using just 3 partitions:
[Code]....
I left the rest untouched. This worked fine; after completing the install and setting up GRUB, I rebooted into the installed system. I then installed GParted and cut the drive up further to finish my table on both hard drives. I then used mdadm --create etc. to create my RAID partitions. So I now have:
[Code]....
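One step that's easy to miss when arrays are created after the install with mdadm --create: recording them so they assemble at boot. A sketch, with paths per Fedora's conventions (double-check against your release before trusting it):

```shell
# Append the running arrays' definitions to mdadm's config
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# Rebuild the initrd so the arrays assemble at boot
# (mkinitrd on Fedora 10; later releases use dracut instead)
sudo mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

# Then add the filesystems to /etc/fstab as usual, e.g.:
# /dev/md2  /data  ext3  defaults  0 2
```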
Fedora is having trouble identifying a RAID partition; it sees the disks as separate drives. I got drivers from Dell, but during a Fedora DVD install it mentions nothing about a place to install extra drivers.
When it says it must "initialize" the drives, Fedora then breaks the Dell BIOS RAID. How can I either install the Dell drivers or make Fedora see the RAID partition as one?
I'm about to install Ubuntu on two 250-gigabyte hard drives in a RAID 1 array, but I'm confused about how to partition my hard drives. How much space should I give to each partition? How many partitions should I create and where should I mount them? (I should mention that Ubuntu will be the only OS on this array.)
Installing Ubuntu 10.10 desktop on a HighPoint RocketRAID 2642. Installing Ubuntu, it does not find the drive. How do I install the drivers so I can install, and then boot from the RAID drives after the installation?
I have two drives on my Ubuntu server: a 40GB IDE 5400RPM and an 80GB SATA 7200RPM.
Is it at all possible to RAID 0 these two drives? I know it's pretty unorthodox, but it's what I've got without having to buy anything.
If so, what are the limits, cons, and/or potential pros of doing a RAID of these two?
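It should work: unlike most hardware RAID 0, mdadm's raid0 handles unequal members by striping across both until the smaller fills, then continuing on the remainder of the larger, so you get roughly 40 + 80 = 120GB total. The obvious cons: a single-disk failure loses everything, and throughput in the striped region is capped by the 5400RPM IDE drive. A sketch (partition names are assumptions; on old kernels the IDE disk may appear as /dev/hda instead):

```shell
# /dev/sda1 = 40GB IDE partition, /dev/sdb1 = 80GB SATA partition (assumed)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/stripe
```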
I'm thinking of getting a forensic bridge to help with fixing machines, and I was wondering if the support is good for FireWire, and if so, whether the drives show up in /dev like hda and sda do, so I can keep using my normal set of tools.
I have a RAID 1 that is mounted and working. But for some reason I can also see the individual drives under Devices in gnome-shell. Is there a way to hide them from GNOME, or from Linux in general, so that only the RAID 1 can be seen?
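One way is a udev rule that tags RAID member devices so udisks (which feeds GNOME's Devices list) ignores them. Hedged: the property name depends on the udisks version (UDISKS_IGNORE for udisks2, UDISKS_PRESENTATION_HIDE for the older udisks1), and the rules filename is a convention, not a requirement:

```shell
# Hide raw md member devices from the desktop (udisks2 property shown;
# use ENV{UDISKS_PRESENTATION_HIDE}="1" on older udisks1 systems)
sudo tee /etc/udev/rules.d/99-hide-raid-members.rules <<'EOF'
ENV{ID_FS_TYPE}=="linux_raid_member", ENV{UDISKS_IGNORE}="1"
EOF

# Reload the rules and retrigger so the change takes effect
sudo udevadm control --reload-rules
sudo udevadm trigger
```

The assembled /dev/md device carries an ext (not linux_raid_member) signature, so it keeps showing up normally.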
I want to make a RAID 5 array with four 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks and then, once I copy the data from the 4th onto the array, add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
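The plan described above maps onto mdadm almost directly: build a 3-disk array, copy the data over, then add the fourth disk and reshape. A sketch with assumed device names; note that the reshape and final resize can take many hours on 2TB disks:

```shell
# 1. RAID 5 over the three empty disks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
sudo mkfs.ext4 /dev/md0

# 2. Mount it and copy the data off the fourth disk (/dev/sde1 assumed)
sudo mount /dev/md0 /mnt/array
sudo mount /dev/sde1 /mnt/old
sudo rsync -a /mnt/old/ /mnt/array/

# 3. Add the now-empty disk and reshape from 3 to 4 devices
sudo umount /mnt/old
sudo mdadm /dev/md0 --add /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=4

# 4. After the reshape finishes (watch /proc/mdstat), grow the filesystem
sudo resize2fs /dev/md0
```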
I'm using 4 hard drives (1 of which is a SATA drive) and I need help installing RAID drivers; I can't get these hard drives to mount at all.
I've got a 10.10 installation which I am using as a media/download server. Currently everything is stored on a 1TB USB drive. With the cost of disks falling, and the hassle of trying to back 1TB up to DVD (no, it's not going to happen), I was wondering if there's some Linux/Ubuntu utility which can use multiple disks to provide failover/resilience... Could I just buy another 1TB drive and have it "shadowing" the main one, so that if one goes, I buy another and then restore from the copy?
I have a motherboard with the following chipset: Intel 945GM + Intel ICH7R.
This board: http://emea.kontron.com/products/boa...6lcdmmitx.html
I have two 320GB HDDs set up in hardware RAID as shown below, but in GParted they show as two separate drives. Why is this?
Raid setup:
GParted
fdisk -l
What am I doing wrong here?
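Most likely nothing is wrong: ICH7R RAID is firmware ("fake") RAID, so Linux always sees the two raw disks; the array only appears as a single device once dmraid maps it through device-mapper. Something like this (the set name is whatever dmraid -r reports; the isw_ name below is just an example):

```shell
# List the firmware RAID sets described by the on-disk metadata
sudo dmraid -r

# Activate them; the array then shows up as one block device
sudo dmraid -ay
ls /dev/mapper/     # e.g. isw_xxxxxxxx_Volume0 (name varies)

# Point gparted/fdisk at the mapped device, not at sda/sdb
sudo fdisk -l /dev/mapper/isw_xxxxxxxx_Volume0
```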
I have a RAID 6 built on 6 x 250GB HDDs with ext4. I will be upgrading the RAID to 4 x 2TB HDDs.
How would one go about this? What commands would need to be run? I'm thinking about replacing the drives one at a time and letting it rebuild, but I know that would take a lot of time (which is fine). I don't have enough SATA ports to set up the new RAID and copy things over.
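Replacing one drive at a time does work, and RAID 6 even keeps one disk of redundancy during each rebuild. A sketch of the cycle (device names assumed); note that dropping from 6 members to 4 is a separate, riskier reshape (shrinking the filesystem and array size before mdadm --grow --raid-devices=4), so the simpler path shown here keeps the member count:

```shell
# Repeat for each drive, strictly one at a time:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...swap in a 2TB drive, partition it as sdb1, then:
sudo mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat            # wait for the rebuild before the next drive

# Once every member is 2TB, grow the array into the new space
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0     # expand the ext4 filesystem to match
```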
For SATA drives in AHCI mode the names are /dev/sdX; what about a RAID-0 or RAID-1 array of SAS drives? Are they the same?
Just wondering if it is a distro thing or what.
I can still access them from the Places menu, but how can I add them back to the desktop when I plug them in or pop in a disk?
Is it possible to show only mounted external volumes on the desktop? I tried the method of unchecking /apps/nautilus/desktop/volumes_visible, but it hides both internal and external disks.
I know I can simply create a degraded RAID array and copy the data to the other drive like this: mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
But I want the disk to keep the raw ext3 filesystem so I can still use it from FreeBSD. When using the command above, the disk becomes a RAID disk and I can't do a mount /dev/sdb1 anymore. A little background info: the drives in question are used as backup drives for a couple of Linux and FreeBSD servers. I am using the ext3 filesystem to make sure I can quickly recover the data, since both FreeBSD and Linux can read it without problems. If someone has a different solution for that (2 drives in RAID 1 that are readable by FreeBSD and writable by Linux), I'm all ears.
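One possible way to get both (hedged; test it before relying on it): create the mirror with a superblock format that lives at the *end* of the partition (metadata 0.90 or 1.0) instead of at the start. The filesystem then still begins at offset 0, so FreeBSD can mount the member directly as a plain ext filesystem, while Linux assembles it as RAID 1:

```shell
# --metadata=1.0 puts the md superblock at the END of the device,
# leaving the ext3 filesystem at offset 0, mountable directly elsewhere
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --metadata=1.0 missing /dev/sdb1
sudo mkfs.ext3 /dev/md0

# On Linux, always use the array:
sudo mount /dev/md0 /mnt/backup

# On FreeBSD, mount the raw member read-only; writing to it there
# would silently desynchronize the mirror.
```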
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours googling this and have read some interesting articles (probably way beyond what the problem actually is) but still don't think I have found the answer. I have installed:
Distributor ID: Ubuntu
Description: Ubuntu lucid (development branch)
Release: 10.04
Codename: lucid
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD (without RAID), which houses my OS install, and then I have 2 x 1TB drives (just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller at boot recognizes the array as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu does not present the array as a single volume. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working, which my searches have not led me to?
I'm looking for advice on which drives to add to my server for software RAID 5. I would like to use 2TB drives for the array. The server currently boots off a RAID 1 array, and I have a couple of other drives mounted until I build a RAID 5 array with new drives. I've read horror stories about using the Western Digital WD20EARS and Seagate ST32000542AS, so I'm wondering which large drives are best to use in software RAID.