Have just installed Ubuntu Server 10.04 on a Dell with a PERC 700 RAID card, running a mirrored RAID 1 (2x 500 GB as one drive). All went well. I have installed a basic GNOME front end and Webmin. I want to partition the hard drive into system and data storage areas. What is the best way to do this with a mirrored RAID system?
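Since the PERC presents the mirror to Linux as a single disk, partitioning it is no different from partitioning one plain drive. A minimal sketch, assuming the array shows up as /dev/sda and using an arbitrary 50 GB system / rest-for-data split (the device name and sizes are assumptions; check with lsblk first, and note this layout only applies to a blank array, since relabeling destroys an installed OS):

```shell
# The hardware mirror appears as one ~500 GB disk; the controller handles
# the mirroring underneath, so partition it like any single drive.
parted /dev/sda -- mklabel msdos
parted /dev/sda -- mkpart primary ext4 1MiB 50GiB    # system
parted /dev/sda -- mkpart primary ext4 50GiB 100%    # data storage
mkfs.ext4 /dev/sda1
mkfs.ext4 /dev/sda2
```

On an array that already holds the OS, you would instead shrink and add partitions from a live CD rather than start from a fresh label.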
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow, this went terribly wrong and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code:
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community.
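Before anything destructive, a non-destructive first step worth sketching (device names as in the post; this is only an outline, not a guaranteed recovery): read the existing superblocks and compare their event counters, then try a forced assemble, which rewrites only metadata, not data:

```shell
# Read-only: dump each member's superblock and compare the Events counters.
# Members whose counts are close together usually still agree on the data.
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 | egrep '/dev/sd|Events|State'
# Try a forced assemble before any --create; it bumps stale event counters
# so nearly-in-sync members can rejoin without rewriting array data.
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```

Only if forced assembly fails is a --create --assume-clean with the original device order worth considering, and ideally after imaging the discs.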
I was wondering what the proper way is to set up a hardware-based mirrored RAID. I have two 2 TB drives and an Nvidia-based RAID on the motherboard. I used the Nvidia RAID manager to set up a mirrored array consisting of those two drives. The total shows as a 1.81 TB array.
I boot into OpenSuSE 11.3 and in the partitioner I see two drives (/dev/sda and /dev/sdb, each 1.82 TB) listed instead of a single RAID drive. Am I doing something incorrectly such that two drives show up instead of the array? Does something need to be enabled?
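Motherboard Nvidia RAID is "fakeraid": the BIOS only writes metadata onto the drives, and the operating system has to activate the set itself, typically via dmraid. A sketch of checking and activating it (assuming the dmraid package is installed):

```shell
# List any BIOS RAID sets found in the drives' on-disk metadata.
dmraid -r
# Activate them; the mirror then appears as one device under /dev/mapper
# (typically named something like nvidia_xxxxxxxx).
dmraid -ay
ls /dev/mapper/
```

Once the mapped device exists, the partitioner should be pointed at it instead of the raw /dev/sda and /dev/sdb members.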
I've installed Debian Squeeze twice using CD1: the first time with a high-speed internet connection using a mirror, the second time without. With a mirror, much more is installed, and it is clearly a much more complete installation. What packages would I install to make a basic CD install more like a mirrored install? Is there a list on the CD somewhere?
I'm using an Intel Mac Pro running Ubuntu Server 10.04 64-bit, and I have it working. Currently, I'm just running Ubuntu via Boot Camp, not EFI. The OS drive is 250 GB, which I know is more than enough, but it was what I had free at the time. I added three 1 TB drives to the computer, but I'm not sure how to create a RAID with them. I've done some searching, but still haven't been able to get it done successfully.
I have 10.04.1 on my server with a 250 GB SATA drive, and all my files are on this hard drive. I'm running out of space, so I have another 250 GB SATA drive I need installed. I want to create RAID 0 so I can expand my server's hard drive space. I don't want to lose the data on the original hard drive or reinstall to create the RAID. Is there a way to achieve this with mdadm without altering the first hard drive's data?
I am trying to create a new mdadm RAID 5 device /dev/md0 across three disks where such an array previously existed, but whenever I do, it never recovers properly and tells me that I have a faulty spare in my array. More specific details below. I recently installed Ubuntu Server 10.10 on a new box with the intent of using it as a NAS sorta-thing. I have 3 HDDs (2 TB each) and was hoping to use most of the available disk space as a RAID 5 mdadm device (which gives me a bit less than 4 TB).
I configured /dev/md0 during OS installation across three partitions on the three disks - /dev/sda5, /dev/sdb5 and /dev/sdc5, which are all identical sizes. The OS, swap partition etc. are all on /dev/sda. Everything worked fine, and I was able to format the device as ext4 and mount it. Good so far.
Then I thought I should simulate a failure before I started keeping important stuff on the RAID array - no point having RAID 5 if it doesn't provide some redundancy that I actually know how to use, right? So I unplugged one of my drives, booted up, and was able to mount the device in a degraded state; test data I had put on there was still fine. Great. My trouble began when I plugged the third drive back in and rebooted. I re-added the removed drive to /dev/md0 and recovery began; things would look something like this:
I have never performed a rebuild of a RAID array. I am collecting resources that detail how to rebuild a RAID 5 array when one drive has failed. Does the BIOS on the RAID controller card start to rebuild the data on the new drive once it is installed?
I was trying to update the driver for my Adaptec RAID controller. Unfortunately, Adaptec only provides RPM packages, so I converted the package using alien. After installing with dpkg, I then tried using dkms to build the module:
Code:
root@atulsatom# dkms add -m aacraid -v 1.1.5.26400
Adding the driver was successful, but I got some errors during the build:
Code:
root@atulsatom# dkms build -m aacraid -v 1.1.5.26400
Kernel preparation unnecessary for this kernel. Skipping.
How can I create RAID 1+0 using two drives (one with data on it and the second one new)? Is it possible to synchronize the data drive with the empty drive and create RAID 1+0?
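With only two drives the result can only be a plain RAID 1 mirror (RAID 1+0 needs at least four members), but the synchronize-then-mirror idea does work with mdadm's degraded-array trick. A sketch, where /dev/sdb1 is the hypothetical partition holding your data and /dev/sdc1 the new empty one (verify names with lsblk; a backup first is still prudent):

```shell
# Build a mirror with one slot deliberately missing, using only the new disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdc1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/new
# Copy the existing data onto the degraded mirror.
rsync -a /mnt/olddata/ /mnt/new/
# Add the old disk; its previous contents are overwritten by the resync.
mdadm --add /dev/md0 /dev/sdb1
```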
I own an Iomega NAS enclosure. It is basically a card with 2 SATA drives attached and a network card that runs a small web interface. To make a long story short: the controller card has gone bad. One drive dropped out of the mirrored set... and then came back in and attempted to rebuild. Several hours later the drive dropped out again. I attempted to replace the drive and the controller would not rebuild at all.
To summarize: I have accepted that the controller card is toast and want to proceed assuming one drive is compromised and the other still has salvageable data (about 18 GB needs to be recovered). I took the presumed "good" drive and attached it to a standalone enclosure that adapts it from SATA to USB. When I look under the disk manager (using Ubuntu 10) I see the following appear:
1) The multi-disk device appears, indicating it is part of a logical drive. Indented from that is
2) the "Array" icon.
3) The USB icon, indicating a "peripheral device". Indented from that is
4) the drive, as a 500 GB hard disk (correct).
When I click on 4) I see two volumes: a 1 GB volume with partition type Linux (0x83) and a 499 GB partition of the same type. However, because it thinks it is part of an array, it is not mounting. How do I tell this drive "you are not in a mirrored array anymore... mount as a single drive"?
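Many such NAS boxes use Linux md under the hood, which would explain the array icon. Assuming that is the case here (an assumption; the --examine step confirms it), a sketch of two ways to get at the data, with the device names hypothetical:

```shell
# Read-only: confirm there is md metadata on the big partition.
mdadm --examine /dev/sdb2
# Option A (safest): run it as a degraded one-disk mirror and mount read-only.
mdadm --assemble --run /dev/md0 /dev/sdb2
mount -o ro /dev/md0 /mnt/rescue
# Option B (only after the 18 GB is safely copied off): erase the RAID
# metadata so the partition mounts as a plain single drive from then on.
# mdadm --zero-superblock /dev/sdb2
```

Option B destroys only the RAID superblock, not the filesystem, but it is one-way, which is why it is commented out until the data is recovered.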
How long does a hardware RAID card (RAID 1, two drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
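A rough rule of thumb: a hardware mirror copies the whole disk block-by-block, so the full 1 TB counts regardless of the 500 GB used. Dividing capacity by sustained throughput gives a ballpark; the ~100 MB/s rate below is an assumption (real rates vary with the drives and controller load):

```shell
# Back-of-envelope rebuild estimate: whole-disk copy at an assumed rate.
bytes=$((1000 * 1000 * 1000 * 1000))   # 1 TB to mirror
rate=$((100 * 1000 * 1000))            # ~100 MB/s sustained (assumption)
secs=$((bytes / rate))                 # 10000 seconds
printf '%d h %d min\n' $((secs / 3600)) $((secs % 3600 / 60))   # → 2 h 46 min
```

So roughly 3 hours at full speed, and several times that if the controller throttles the rebuild while the array is in use.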
My Linux server is running with an old IDE hard drive, getting these hdparm results:
[code]...
I have a WD Raptor drive I'm going to install and put a fresh install of Linux on. I'm just curious: will using a much faster HD as my main drive increase the speed of my network transfers from the RAID drive? Do transfers only go as fast as the system drive?
I've never worked with a software RAID before, and I was a little worried about getting the hang of it before actually relying on it. So what I decided to do was create a RAID 5 on my Fedora 12 installation, and then install Fedora 14 and see if the RAID 5 was still intact on the OTHER 4 disks that were NOT part of the OS drive. After installing Fedora 14 I noticed that the RAID 5 was broken, and I googled for hours and even started a thread here, but could not get it working again.
I decided to start from scratch and just delete all of the partitions and the RAID, and do a new RAID 5 using 4 disks that are 2 TB each. I googled for a long time and tried using both the CLI (mdadm) and the GUI (Disk Utility), but I was not able to successfully get a new RAID 5 working. I've tried various examples, to no avail. Any link to a definitive resource on how to set up a RAID 5? I don't just mean the commands for mdadm, but also the actual disk formatting that's required before setting up a RAID, if it's even required. I say this for two reasons:
1) In Fedora 12, using Disk Utility, I didn't have to format the drives first, or anything else. I just selected 4 drives, said to make them a RAID 5, and then formatted the array as ext4. But many examples I've seen say that the drives must already be partitioned, and that you use partition #1 of each drive to set up the RAID, whereas when I did it, I didn't have any partitions (they were new drives), and I could format afterwards.
2) One example I've found says to use ext*, whereas another example says to use xfs. I don't feel this matters, and it is only a matter of choice/taste, right?
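For what it's worth, mdadm itself needs neither pre-made filesystems nor partitions (it accepts whole disks; the 0xFD partition type only matters for the old in-kernel autodetect), and ext4 vs. xfs is indeed largely taste. A sketch assuming the four 2 TB disks are /dev/sdb through /dev/sde (an assumption; verify with lsblk before running anything):

```shell
# Wipe any leftover md metadata from the earlier broken attempt,
# then build the array directly on the whole disks.
mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat      # watch the initial resync progress
mkfs.ext4 /dev/md0    # the filesystem goes on the array, not on the members
```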
I have three hard drives in my computer that I want to make RAID 0. All of them already have partitions and data on them. What I want to know is whether I can, without losing data, add the disks to RAID and then merge the partitions. All the partitions are of the same type. Or would it be easier/better/possible to do this with LVM? Even if I'd have to shrink partitions and copy data to a new LVM one to get it set up properly, would it be better than RAID 0?
I have a RAID 1 array which I set up on a CentOS box. I can configure the array fine; the problem is that every time I restart my machine I can no longer see the array and have to go create it all over again. I tried doing a few searches on it but have come up with nothing so far.
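One common cause of this (a guess, but easy to check) is that the array was never recorded in /etc/mdadm.conf, so nothing reassembles it at boot. A sketch of making it persistent:

```shell
# Append the array definition so it is assembled automatically at boot.
mdadm --detail --scan >> /etc/mdadm.conf
# Depending on the CentOS release, also rebuild the initrd so early boot
# sees the config: mkinitrd on older releases, `dracut -f` on newer ones.
```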
I have a software RAID array using mdraid that consists of two 1.5 TB drives that I use for storage; the array is mounted at /Storage. I am running out of space in the array, so I ordered two more 1.5 TB drives to create a 4-drive RAID 1+0 array, which will be 3 TB. My question is: how do I create the new array and not lose any data?
The drives and partitions are sdc1, sdd1, and soon to be sde1, sdf1. I currently have 4 RAID arrays (md0,md1,md2,md3). I think I can create the RAID 1+0 array with the two new drives, copy the data from my current array to the new one, remove the old array, then add the two original drives to the new array. But I wanted to ask on here first to make sure my data doesn't go poof.
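The plan described above can work, because mdadm will create a RAID 10 with half its members missing. A rough sketch, assuming the new drives come up as /dev/sde1 and /dev/sdf1 and that the default near-2 layout is used (verify every device name with lsblk, and confirm the copy before stopping anything):

```shell
# In the default near-2 layout, slots 0+1 and 2+3 are the mirror pairs,
# so leaving slots 1 and 3 "missing" keeps one member of each pair.
mdadm --create /dev/md4 --level=10 --raid-devices=4 \
    /dev/sde1 missing /dev/sdf1 missing
mkfs.ext4 /dev/md4
mount /dev/md4 /mnt/new
rsync -a /Storage/ /mnt/new/     # copy everything, preserving attributes
# After verifying the copy, stop the old mirror and donate its disks:
# mdadm --stop ...               # whichever md device held /Storage
mdadm --add /dev/md4 /dev/sdc1 /dev/sdd1
```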
I've got 3 extra disks on OpenSuse 11.2 - all the same size. I've created a partition on all of them as type 0xFD. If I then try and add raid in yast I get "There are not enough suitable unused devices to create a RAID."
I run

mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1

and I get

md1: raid array is not clean -- starting background reconstruction

Why is it not clean? Should I be worried? The HD is not new; it has been used before in a RAID array but has been repartitioned.
What I have:
- 2x 150 GB drives (sda) on a RAID card (RAID 1) for the OS (Slack 13.37)
- 2x 2 TB drives (sdb) on that same RAID card (RAID 1, too)
- 2x 1.5 TB drives (sdc, sdd) directly attached to the MoBo
- 2x 750 GB drives (sde, sdf) attached to the MoBo too
If I go about it the normal way, I'd create soft RAID 1 out of the 1.5 TB and the 750 GB drives and LVM all the data arrays (2 TB + 1.5 TB + 750 GB) together to get a unified disk. If I use btrfs, will I be able to do the same? I mean, I have read how to create RAID arrays with mkfs.btrfs and that some LVM capability is incorporated in the filesystem, but will it understand what I want it to do if I just say
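For comparison, a btrfs sketch of just the MoBo-attached data pool (sdc through sdf, as listed above). btrfs raid1 keeps two copies of every block spread across devices, and it copes with mixed member sizes, so no separate md or LVM layer is needed for those four drives:

```shell
# One pooled, mirrored filesystem across the four directly-attached disks.
mkfs.btrfs -d raid1 -m raid1 /dev/sdc /dev/sdd /dev/sde /dev/sdf
mount /dev/sdc /mnt/data      # mounting any member mounts the whole pool
btrfs filesystem show /mnt/data
```

The 2 TB pair behind the hardware RAID card would stay outside this pool, since the card already mirrors it.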
I'm working in a little company, and 2 weeks ago one of our servers had a hard disk failure (yes, it was a Seagate 11). After two days without sleep trying to recover everything (and we did it!!), we decided to use software RAID on some of our servers, so if one HD fails we can continue with our system without losing anything. Yes, I know you are normally supposed to take all these precautions beforehand so this never happens, but if it has never happened to you, you always think you're lucky and that it's never going to happen to you - until one day you discover reality.
So now this server is running CentOS with the default partition layout: one boot partition and the LVM. I'm reading everything I can find about software RAID and LVM, but I can't find whether it's possible, with the system running, to create a software RAID without having to reinstall the whole system. Is it possible to do it? If not, what are my options for making a system backup before reinstalling everything?
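It should be possible without a reinstall, because LVM can move extents between physical volumes online. A rough outline of the usual degraded-mirror migration, where /dev/sdb is a hypothetical new empty disk and VolGroup00 a hypothetical volume group name (every device and VG name here is an assumption, and a full backup first is still wise):

```shell
# Clone the partition table onto the new disk, then build one-legged mirrors.
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1  # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2  # LVM PV
pvcreate /dev/md1
vgextend VolGroup00 /dev/md1
pvmove /dev/sda2 /dev/md1        # migrate the LVM extents while running
vgreduce VolGroup00 /dev/sda2
mdadm --add /dev/md1 /dev/sda2   # the old disk joins the mirror and syncs
```

/boot still has to be copied over and the bootloader reinstalled on both disks before the old one is wiped, which is the fiddly part of this procedure.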
I want to create a file server with Ubuntu and have two additional hard drives in a RAID 1 setup. Current hardware: I purchased a RAID controller from [URL]... (Rosewill RC-201). I took an old machine with a 750 GB hard drive (installed Ubuntu on this drive), installed the Rosewill RAID card in a PCI slot, connected two 1 TB hard drives to the Rosewill RAID card, then went into the RAID BIOS and configured it as RAID 1.
My problem: when I boot into Ubuntu and go to the disk utility (I think that's what it's called), I see the RAID controller present, with the two hard drives showing up separately. I formatted and tried various partition combinations, and at the end of the day I still see two separate hard drives. Just for giggles, I also tried RAID 0 to see if it would combine the drives.
I often use dvdbackup to mirror DVDs to my hard drive. I used to use k3b to burn them to disc. The problem is I no longer use KDE, and I really hate installing KDE apps these days, since they seem totally bloated. Brasero isn't smart enough to take a directory with a DVD's file structure mirrored into it and burn it as a video DVD. Is there a way to convert such a directory into an ISO from the command line so Brasero can handle it?
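One command-line route (assuming the genisoimage package is installed and the directory contains the VIDEO_TS tree that dvdbackup produced; the paths and volume label are placeholders):

```shell
# Build an ISO laid out as a video DVD from the mirrored directory.
genisoimage -dvd-video -V MY_DVD -o movie.iso /path/to/dvd_mirror
# Brasero can then burn movie.iso as a plain image, or burn directly:
wodim dev=/dev/sr0 -v movie.iso
```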
Q1) I was wondering if it is possible to Dual boot Ubuntu with Windows XP on a 1TB RAID-0 setup ?
Q2) Also, is it possible to create a SWAP partition (for Ubuntu) on a NON RAID-0 HDD ?
Q3) Lastly... I read GRUB 2 is the default boot manager... should I use that, or GRUB legacy / LILO?
I have a total of 3 HDDs on this system: -- 2x 500 GB WD HDDs (non-advanced-format) ... RAID 0 setup -- 1x 320 GB WD HDD (non-RAID setup) (The non-RAID HDD is intended to be a swap drive for both XP and Ubuntu = 2 partitions)
I plan on making multiple partitions... and reserve partition space for Ubuntu (of course).
I have the latest version of the LiveCD created already.
Q4) Do I need the Alternate CD for this setup?
I plan on installing XP before Ubuntu.
This is my 1st time dual booting XP with Ubuntu.
I'm using these as my resources: - [url] - [url]
Q5) Anything else I should be aware of (possible issues during install)?
Q6) Lastly... is there anything like AHCI (Advanced Host Controller Interface) support in Ubuntu, as in Windows?
(Since I need a special floppy during the Windows install...) I want to be able to use the Native Command Queuing capabilities of my SATA drives in Ubuntu.
I have 4 1.5 TB Seagate drives configured in RAID 5 via mdadm on Karmic. It seems that after a while, one drive always drops out of the array. It's a brand new drive, and after a reboot, it will come back in and rebuild just fine, so somehow I doubt the drive is actually failing. Here's a dmesg snip. The mounting that happens at the top is the mounting of the array, and as you can see, after a while, there is some kind of write failure.
[70178.385356] EXT3 FS on md0, internal journal
[70178.385373] EXT3-fs: mounted filesystem with writeback data mode.
[95234.954141] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[95234.954160] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[95234.954162]          res 40/00:00:c6:66:a8/00:00:ae:00:00/e0 Emask 0x4 (timeout)
[95234.954168] ata5.00: status: { DRDY }