Hardware :: RAID 1 - Setup More Than One Partition On Clean Drives?
Feb 24, 2011
I've finally found a couple of useful tutorials on setting up RAID in Linux. However, because this is new ground to me, I have a couple of basic questions which I think the tutorial writers gloss over because of their familiarity with the process. My questions are these:
1. Most tutorials cover setting up only one partition on clean drives. Can I set up more than one (e.g. / and /home) to be mirrored as two separate partitions? (See the sketch after these questions.)
2. When starting with two identical clean drives, do I need to set up the partitions identically on both drives, or only the partitions that I want mirrored to the second drive?
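The common answer to both (a sketch under assumed device names /dev/sda and /dev/sdb, not something the tutorials themselves spell out) is to partition both drives identically and build one mirror per mountpoint:

Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo mkfs.ext4 /dev/md0   # becomes /
sudo mkfs.ext4 /dev/md1   # becomes /home

Partitions that should not be mirrored (swap, say) can simply be created on one or both drives outside of any md array.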
I have a total of 4 HDDs, 500GB 7200rpm, that I would like mirrored using RAID 10. As you can see from the image, Ubuntu 9.10 Server isn't recognizing the full 2TB. In fact, I'm not even sure about the configuration, as I was expecting the HDDs to show up as four 500GB drives. Instead I have the configuration above set and ready for Ubuntu to be installed on.
1. Is this typical of a server pre-configured from Dell (PERC 6 RAID controller)?
2. Why is Ubuntu not recognizing the full capacity of the drives, especially when it's a server install?
I have been battling with FC10 and software RAID for a while now, and hopefully I will have a fully working system soon. Basically, I tried using the F10 live CD to set up software RAID 1 between 2 hard drives for redundancy (I know it's not hardware RAID, but the budget is tight) with the following table:
[Code]....
I set these up using the RAID button in the partitioning section of the installer, except for swap: using the New Partition button, I created one swap partition on each hard drive that didn't take part in the RAID. Almost every time I tried this install, it halted with an error on one of the RAID partitions and exited the installer. I actually managed it once, in maybe 10-15 tries, but then I broke it. After getting very frustrated, I decided to build it using just 3 partitions:
[Code]....
I left the rest untouched. This worked: after completing the install and setting up GRUB, I could reboot into the installed system. I then installed GParted and cut the drives up further to finish my table on both hard drives, and used mdadm --create etc. to create my RAID partitions. So I now have
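For reference, the post-install step described there usually looks something like this (a sketch; the partition names /dev/sda5 and /dev/sdb5 are hypothetical):

Code:
sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
sudo mkfs.ext3 /dev/md2
# record the array so it assembles at boot (Fedora keeps this in /etc/mdadm.conf)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf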
I just went out and bought parts to build a new computer, and among them was a Gigabyte GA-890FXA-UD5 motherboard ([URL]). The board has 3 (well, 4, but we'll stick to the 3) main SATA interfaces, with 2 ports per interface, allowing 6 SATA drives. In slot_0 I put my Blu-ray drive, and in slot_1 the drive that will host the OS and its partitions; that is the SATA connector pair on the left. On the middle connector pair (slot_2 and slot_3) I have two 2TB drives, and on the connector pair on the right (slot_4 and slot_5) I have two 1.5TB drives.
I have created a RAID array on one of my servers. The RAID health is OK, but it shows a warning; what could be the problem?

WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
This is a strange problem. I have Ubuntu Server installed on proper server hardware. My RAID card reports all four HDDs to Ubuntu as single drives, which is how I set it up, because Ubuntu does not recognize the RAID card in the server. Now you might say: if that's the case, why don't I remove the RAID card and have the BIOS report the four single drives to Ubuntu, so I could set up software RAID? Well, my board has only one SATA port. Ubuntu is all set up on the first drive, and I have set the other three up using software RAID.
The system works great; the only problem is that it freezes sometimes. Not every time, just on the odd occasion. If I use the same hardware without the RAID card (and of course just one HDD), it's great: no freezes. That leads me to believe it's the RAID card. My question is, why will it run great for days and then just freeze on me? Probably silly, but if there's an issue with the RAID card, it shouldn't work at all, should it?
Q1) I was wondering if it is possible to dual-boot Ubuntu with Windows XP on a 1TB RAID-0 setup?
Q2) Also, is it possible to create a swap partition (for Ubuntu) on a non-RAID-0 HDD?
Q3) Lastly... I read GRUB 2 is the default boot manager... should I use that, or GRUB legacy / LILO?
I have a total of 3 HDDs on this system:
-- 2x 500GB WD HDDs (non-Advanced Format), in a RAID-0 setup
-- 1x 320GB WD HDD (non-RAID setup)
(The non-RAID HDD is intended to be a swap drive for both XP and Ubuntu = 2 partitions)
I plan on making multiple partitions... and reserve partition space for Ubuntu (of course).
I have the latest version of the LiveCD created already.
Q4) Do I need the Alternate CD for this setup?
I plan on installing XP before Ubuntu.
This is my 1st time dual booting XP with Ubuntu.
I'm using these as my resources: - [url] - [url]
Q5) Anything else I should be aware of (possible issues during install)?
Q6) Lastly... is there anything in Ubuntu like the AHCI (Advanced Host Controller Interface) driver in Windows?
(In Windows I need a special driver floppy during the install...) I want to be able to use the native command queuing capabilities of my SATA drives in Ubuntu.
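For what it's worth, the Linux kernel's stock libata/AHCI driver handles this out of the box, so no extra driver is needed; NCQ can be verified after install (a sketch, assuming the drive shows up as /dev/sda):

Code:
sudo hdparm -I /dev/sda | grep -i ncq     # capability is listed if supported
cat /sys/block/sda/device/queue_depth     # a value above 1 means queuing is active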
I'm breaking into the OS-drive side of RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1), and have been testing the failover and rebuild process. It works great when physically failing either drive. My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: fail one of the disks out of the RAID-1, then image it to a file saved on an external disk using the dd command (if memory serves, something like "sudo dd if=/dev/sda of=backupfilename.img"). Then re-add the failed disk to the array. In the event I needed to roll back to one of those snapshots, I would just use dd to dump the image back onto an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
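The cycle described above would look roughly like this (a sketch with hypothetical names: array /dev/md0, member partition /dev/sdb1, whole disk /dev/sdb):

Code:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sudo dd if=/dev/sdb of=/mnt/external/os-backup.img bs=1M   # whole disk, MBR included
sudo mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat   # watch the rebuild progress

One caveat worth noting: while the image is being taken, the array is running degraded, so a failure of the remaining disk during that window is unprotected.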
So I set up a RAID 10 system, and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
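Active devices hold live data (with 4 actives in the default near-2 layout, two mirrored pairs are striped together); spares hold nothing and sit idle until a member fails, at which point md rebuilds onto them automatically. A hypothetical 4-active-plus-1-spare creation:

Code:
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1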
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1

and I get:

md1: raid array is not clean -- starting background reconstruction

Why is it not clean? Should I be worried? The HD is not new; it has been used before in a RAID array, but has since been repartitioned.
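Since the disk carries leftover metadata from its previous array, a commonly suggested step (an assumption about the cause, not a confirmed diagnosis) is to wipe the old md superblock before re-creating:

Code:
sudo mdadm --zero-superblock /dev/sdb1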
So I didn't notice when I set up my CentOS 5.5 server that I left / as RAID 0 on md1. All the rest are RAID 1. Is there a way I can convert the array to RAID 1 without risking data loss? I'm glad I caught this before I set up any other services; I've only set up smb so far...
[root@ftpserver ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1               16G  3.0G   13G  20% /
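mdadm of that vintage has no in-place RAID 0 to RAID 1 conversion, so the usual route is a backup-and-rebuild from rescue media: back up /, recreate md1 as a degraded mirror, restore into it, then complete the mirror. A rough sketch (partition names are hypothetical, and a verified backup comes first):

Code:
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sda3 /dev/sdb3
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sda3
mkfs.ext3 /dev/md1
# restore the backup of / onto /dev/md1 and reinstall grub, then:
mdadm /dev/md1 --add /dev/sdb3

Note the mirror will have half the capacity of the old stripe, so with 3.0G used on a 16G RAID 0 the data still fits.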
I'm running Karmic Server with GRUB2 on a Dell XPS 420. Everything was running fine until I changed 2 BIOS settings in an attempt to make my VirtualBox guests run faster. I turned on SpeedStep and Virtualization, rebooted, and was slapped in the face with a GRUB error 15. I can't, in my wildest dreams, imagine how these two settings could cause a problem for GRUB, but they have. To make matters worse, I've set my server up to use LUKS-encrypted LVM on soft-RAID. From what I can gather, it seems my only hope is to reinstall GRUB. So, I've tried to follow the live CD instructions outlined in the following article (adding the necessary steps to mount my RAID volumes and LVMs). [URL]
If I try mounting the root LVM as '/dev/vg-root' on /mnt and the boot partition as '/dev/md0' on /mnt/boot, then running the command $ sudo grub-install --root-directory=/mnt/ /dev/md0 gives me these errors:

grub-setup: warn: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea.
grub-setup: error: Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.

Somewhere in my troubleshooting, I also tried mounting the root LVM as '/dev/mapper/vg-root'. This results in a grub-install error:

$ sudo grub-install --root-directory=/mnt/ /dev/md0
Invalid device 'dev/md0'
Obviously, neither case fixes the problem. I've been searching and troubleshooting for several hours this evening, and I must have the system operational by Monday morning. That means if I don't have a solution by early tomorrow morning... I'm screwed. A full rebuild will be my only option.
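The first error message points at the usual fix: install GRUB to the MBR of each raw disk rather than to /dev/md0, from inside a chroot. A sketch of that recovery, with assumed names (the LUKS device and mapping name in particular are guesses):

Code:
sudo mdadm --assemble --scan
sudo cryptsetup luksOpen /dev/md1 cryptvol
sudo vgchange -ay
sudo mount /dev/mapper/vg-root /mnt
sudo mount /dev/md0 /mnt/boot
for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt grub-install /dev/sdb   # both RAID members, so either disk can boot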
I've spent all afternoon trying to install Ubuntu Lucid on my fakeRAID 0 configured (2) HDDs and am unable to set GRUB up. The fakeRAID setup is provided by Intel Matrix Storage Manager; it is correctly enabled, and the BIOS is also correctly set up. In fact, I've managed to install Windows 7 with no significant hitch. After struggling with partitioning the drives (I had to follow advice I found in a very helpful guide online [0]), creating the filesystems AND getting Ubuntu's installer to actually do what it is supposed to do, I now cannot seem to set GRUB up. My system, as it stands, is unbootable except via live CD.
This is how the RAID 0 device is partitioned:

Code:
# fdisk -l /dev/mapper/isw_ecdeiihbfi_Volume0

Disk /dev/mapper/isw_ecdeiihbfi_Volume0: 1000.2 GB, 1000210694144 bytes
255 heads, 63 sectors/track, 121602 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x6634b2b5
.....
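The usual live-CD recovery for GRUB on a dmraid volume is a chroot install against the mapper device. A sketch only (the partition suffix varies between p1 and 1 depending on the dmraid/kpartx version, and GRUB2's dmraid support in Lucid was known to be temperamental):

Code:
sudo mount /dev/mapper/isw_ecdeiihbfi_Volume0p1 /mnt
for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt grub-install /dev/mapper/isw_ecdeiihbfi_Volume0
sudo chroot /mnt update-grub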
I installed Red Hat Enterprise 3 on one of my servers. In my haste I didn't properly partition both hard drives, and only properly partitioned one of them. Thus now I have
Where /dev/sda1 is actually an 80 GB hard drive. Is there any way I can safely and easily repartition the unpartitioned space without causing a huge mess? I have a very important Oracle database on /dev/sdb1, and I want to be able to back it up to the second disk. Can I create a partition on that drive?
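Creating a partition in the unallocated space is safe as long as the existing partitions are left alone. A sketch, assuming the free space is on /dev/sda and the new partition comes up as /dev/sda2 (both hypothetical):

Code:
fdisk /dev/sda      # n to create the new partition, w to write the table
partprobe /dev/sda  # re-read the partition table without rebooting
mkfs.ext3 /dev/sda2 # then mount it and point the Oracle backups at it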
I have a RAID 1 that is mounted and working, but for some reason I can also see the individual member drives under Devices in GNOME Shell. Is there a way to hide them from GNOME, or from Linux in general, so that only the RAID 1 can be seen?
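One approach (a sketch; it assumes the desktop lists the drives via udisks, which honors the UDISKS_IGNORE hint) is a udev rule that hides anything carrying md member metadata:

Code:
sudo tee /etc/udev/rules.d/99-hide-raid-members.rules <<'EOF'
ENV{ID_FS_TYPE}=="linux_raid_member", ENV{UDISKS_IGNORE}="1"
EOF
sudo udevadm control --reload
sudo udevadm trigger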
I was recently given two hard drives that were used as a RAID (maybe fakeRAID) pair in a Windows XP system. My plan was to split them up, install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems after installation, however, the system broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live CD gives the error:

fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb

and running fsck from 9.04 installed on the other hard drive gives an error like:
The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device>
In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue, since I get the same problem with different drives in different boxes.
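The fsck.isw_raid_member message supports that theory: the old Intel (isw) fakeRAID signature is still sitting at the end of each disk, so the tools keep typing the drive as a RAID member instead of ext4. A hedged sketch of clearing it; this permanently erases the RAID metadata, so only do it once the drives are definitely out of the old array:

Code:
sudo dmraid -r                 # list any fakeRAID signatures that are detected
sudo dmraid -r -E /dev/sdb     # erase the isw metadata from that disk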
I want to make a RAID 5 array with four 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks; then, once I copy the data from the 4th onto the array, I will add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
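The plan maps onto mdadm fairly directly. A sketch with hypothetical device names (/dev/sdb through /dev/sde):

Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0
# mount /dev/md0 and copy the data off the 4th drive, then grow the array:
sudo mdadm /dev/md0 --add /dev/sde1
sudo mdadm --grow /dev/md0 --raid-devices=4
# after the (long) reshape completes, enlarge the filesystem:
sudo resize2fs /dev/md0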
I have recently installed an Asus M4A77TD Pro system board which supports RAID.
I have 2x 320GB SATA drives I would like to set up in RAID 1. So far I have configured the BIOS for RAID 1 on the drives, but when installing Ubuntu 10.04 from the CD, it detects the RAID configuration but fails to format.
When I reset all the BIOS settings to standard SATA, Ubuntu installs and works as normal, but I just have 2 drives without any RAID. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to set it up within Ubuntu.
I've got a 10.10 installation which I am using as a media/download server. Currently everything is stored on a 1TB USB drive. With the cost of disks falling, and the hassle of trying to back 1TB up to DVD (no, it's not going to happen), I was wondering if there's some Linux/Ubuntu utility which can use multiple disks to provide failover/resilience. Could I just buy another 1TB drive and have it "shadowing" the main one, so that if one dies, I buy another and restore from the copy?
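mdadm's RAID 1 does exactly this kind of shadowing. One way to get there without wiping the existing data (a sketch; the old drive as /dev/sdb and the new one as /dev/sdc are hypothetical):

Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdc1
sudo mkfs.ext4 /dev/md0
# mount /dev/md0, copy everything over from the old drive, then absorb it:
sudo mdadm /dev/md0 --add /dev/sdb1   # the old drive joins and syncs up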
I have a RAID 6 built on 6x 250GB HDDs with ext4. I will be upgrading the RAID to 4x 2TB HDDs.
How would one go about this? What commands would need to be run? I'm thinking about replacing the drives one at a time and letting the array rebuild each time, which I know would take a lot of time (that's fine). I don't have enough SATA ports to set up the new RAID alongside and copy things over.
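The one-at-a-time swap is workable; each pass looks like the sketch below (hypothetical names). Note that going from six members down to four is a separate, riskier --grow reshape on top of this, so the sketch covers only the swap and the final capacity bump:

Code:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# physically swap in the 2TB disk, partition it, then:
sudo mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat                      # wait for the rebuild to finish
# once every member has been replaced:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0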
I have used SUSE for some time now, and I am now returning to Ubuntu. In YaST, cron jobs can be edited easily to keep the tmp partition clean. I would like to do the same in Ubuntu, as I know a full /tmp partition prevents the system from booting. So, how do I do it? I have tmpreaper installed, but it is not as handy as YaST. tmpreaper.conf can indeed be edited, but I have no idea how; it always opens "read only".
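The file opens read-only because it is owned by root; editing it through sudo is all that's needed, and tmpreaper then runs daily from /etc/cron.daily. A minimal sketch (TMPREAPER_TIME is the stock variable in /etc/tmpreaper.conf; 7d is just an example value):

Code:
sudo nano /etc/tmpreaper.conf
# inside the file, e.g. purge anything in /tmp untouched for 7 days:
TMPREAPER_TIME=7d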
I have a scenario: a domain [URL], and then there are 4 private computers on which applications are hosted on port 80. So when someone from outside accesses the site, it looks like [URL]. I added
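Assuming the goal is one public domain fronting several internal port-80 hosts, a name-based reverse proxy is the usual answer. A hedged Apache sketch (all hostnames and addresses are hypothetical; mod_proxy and mod_proxy_http must be enabled):

Code:
<VirtualHost *:80>
    ServerName app1.example.com
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>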
I know I can simply create a degraded raid array and copy the data to the other drive like this: mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
But I want that specific disk to keep the raw ext3 filesystem so I can still use it from FreeBSD. When using the command above, the disk becomes a RAID member and I can't mount /dev/sdb1 directly anymore. A little background: the drives in question are used as backup drives for a couple of Linux and FreeBSD servers. I am using the ext3 filesystem to make sure I can quickly recover the data, since both FreeBSD and Linux can read it without problems. Does someone have a different solution for this (2 drives in RAID 1 that are readable by FreeBSD and writeable by Linux)?
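One trick worth testing (an assumption, not confirmed in the thread) is to create the mirror with 1.0 metadata, which lives at the end of each member; the ext3 filesystem then still starts at sector 0 of the partition, so FreeBSD can mount the raw member read-only while Linux writes through the array:

Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb1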
I shall start off by saying that I have just jumped from Windows 7 to Ubuntu and am not regretting the decision one bit. I am, however, stuck with a problem. I have spent a few hours Googling this and have read some interesting articles (probably way beyond what the problem actually is), but still don't think I have found the answer. I have installed:
I am running the install on an old Dell 8400. My PC has an Intel RAID controller built into the motherboard. I have 1 HDD (without RAID), which houses my OS install, and then I have 2x 1TB drives (just NTFS-formatted drives with binary files on them, nothing more) in a RAID 1 (mirroring) array. The Intel RAID controller recognizes the array at boot as it always has (irrespective of which OS is installed); however, unlike Windows 7 (where I was able to install the Intel RAID controller driver), Ubuntu does not see the mirrored volume. Does anyone know of a resolution (which doesn't involve formatting and/or use of some other software RAID solution) to get this working, which my searches have not turned up?
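On Linux, that Intel controller's mirror is assembled in software by dmraid; activating it exposes the pair as a single mapper device that mounts like any other NTFS disk. A sketch (the isw volume name is hypothetical; check ls /dev/mapper for the real one):

Code:
sudo apt-get install dmraid ntfs-3g
sudo dmraid -ay                       # activate any detected fakeRAID sets
ls /dev/mapper/                       # look for an isw_*_Volume* device
sudo mount -t ntfs-3g /dev/mapper/isw_XXXXXXXX_Volume01 /mnt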