I've tried to install Fedora 11, both 32-bit and 64-bit, on my main machine. It could not install; it stops on the first install window. I've already filed a bug but haven't really seen any feedback yet. The bug has something to do with Anaconda and the RAID array, but I really can't tell.
I have an Intel board (see signature). I am currently running Intel RAID software under W7, and it works fine. But I'm wondering: when I attempt to install F11, is my current RAID set-up causing problems? Do I need to get rid of the Intel RAID software and use a Fedora/Linux RAID program to manage the RAID array?
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
Code: mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin the maybe small chance I have left to rescue my data, I would like to hear the input of this wise community before running it.
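Before re-creating anything, a less destructive first step is usually to look at what the superblocks still say and to try a forced assemble; a hedged sketch, using the member partitions named above:
Code:
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2                    # check event counts and device roles in each superblock
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2  # try to assemble from the existing metadata
If --create with --assume-clean really is the only option left, the device order and chunk size have to match the original array exactly, or the data will not be readable.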
I have two hard drives inside an Ubuntu 9.10 machine, each with a data partition on it. What I want is that, as I save files to one data partition (or delete them), the files are copied/deleted automatically on the other data partition. I do not want to start using RAID or end-of-day incremental backups.
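One way to get this without RAID is to watch one partition for changes and mirror them with rsync; a minimal sketch, assuming the inotify-tools and rsync packages are installed and the two data partitions are mounted at /data1 and /data2 (hypothetical mount points):
Code:
#!/bin/sh
# mirror /data1 onto /data2 whenever something under /data1 changes
while inotifywait -r -e modify,create,delete,move /data1; do
    rsync -a --delete /data1/ /data2/
done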
How do I recreate an LVM RAID 1 partition without destroying the data on the discs? I have a 650GB data partition which is a RAID 1 array with ext3. Two days ago the system (Ubuntu 9.04) started to refuse to write to it, claiming "no space left on device" - even though there is about 102GB free and the disk is only 85% full (according to df)! Interestingly, removing a couple of GB did not help; after a reboot the disk was again reported as full.
I did the "tune2fs -m 0" trick and then forced file check on next reboot by "sudo touch /forcefsck" .. and the result is that the raid partition is gone. I have the two physical drives /dev/sd*, unmounted, but the /dev/md1 is no longer there. What can I do to re-create it, without losing the data? I realized that I ran tune2fs on the physical partitions /dev/sd* - was I supposed to run it on /dev/md1?
My data is in /dev/sda5 (ext3). There's an empty partition /dev/sdb5 (ext3). I want /dev/sdb5 to be a mirror image of /dev/sda5; the command to invoke should be:
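For a one-off copy (rather than an ongoing md mirror), a hedged sketch is a block-level clone followed by a filesystem check, assuming /dev/sdb5 is at least as large as /dev/sda5 and both partitions are unmounted:
Code:
dd if=/dev/sda5 of=/dev/sdb5 bs=4M      # raw copy of the whole partition
e2fsck -f /dev/sdb5                     # verify the copied filesystem
tune2fs -U random /dev/sdb5             # give the copy its own UUID so the two don't clash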
After an install of SUSE 11.4, one of my RAID 0 drives (Intel ICH9R) does not mount and is not recognized as being formatted NTFS, while the other RAID 0 unit (ICH9R) is recognized without problems.
The ext4 file system creation in partition #1 of Serial ATA RAID isw_fjidifbhi_Volume0 (mirror) failed. I'm not sure what's wrong - how do I solve this? It happens after I enter my login details and click next; the installer then fails on the install screen with this error.
Been attempting to install Ubuntu on my Intel fakeraid on a new PC. The installer sees the RAID, but errors out:
Quote: The ext4 file system creation in partition #1 of Serial ATA RAID isw_ehbgbddej_Volume0 (mirror) failed.
This seems to be the EXACT issue described in this bug report: I read all the way down the bug report, and at the bottom it says:
Quote: Launchpad Janitor on 2010-06-25 Changed in parted (Ubuntu Lucid): status: Fix Committed → Fix Released
A few weeks ago, a fix was "committed" and "released". I had been working with an earlier 10.04 ISO, so I downloaded 10.04 from the web site today. Still the same problem. Maybe I'm just reading the bug report wrong? If it says the fix is committed and released, I would naively assume that it is fixed. Do I need to apply a patch to get the fix or something? That doesn't seem likely.
I am unable to hibernate my computer while using Ubuntu and I figured out the reason--Ubuntu is not using my swap partition. I would follow the existing tutorials on setting up a swap partition after installing Ubuntu, but since the volume uses hardware RAID 0, the swap partition is not assigned a /dev/ entry (like /dev/sdxx) and I am not sure how I can mount it.
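On these fakeraid setups handled by dmraid, the swap partition usually does get a device node, just under /dev/mapper rather than as /dev/sdXX; a hedged sketch, with a hypothetical volume name:
Code:
ls /dev/mapper                                    # look for something like isw_xxxxxxxx_Volume0p2
sudo mkswap /dev/mapper/isw_xxxxxxxx_Volume0p2    # only if the partition is not already formatted as swap
sudo swapon /dev/mapper/isw_xxxxxxxx_Volume0p2    # enable it for the running session
# and in /etc/fstab, so it is used on every boot:
# /dev/mapper/isw_xxxxxxxx_Volume0p2  none  swap  sw  0  0
For hibernation the resume device also has to point at the same swap volume (on Ubuntu this typically lives in /etc/initramfs-tools/conf.d/resume).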
I am trying to grow my array to make full use of my drives. I have a RAID 6 using mdadm on my home server. The array consisted of six 1.5TB drives when it was created. Since then I've been replacing the 1.5TB drives with new 2TB drives, and now I have replaced the last two drives.
When I added the first three disks I did not use the whole disk for the RAID partition, but rather made it the same size as the 1.5TB partitions. As it turns out this may have been a bad idea. (But it gave me another three 500GB partitions that I turned into one disk using mhddfs.)
Now I'm trying to grow the array. I've been testing in a virtual environment how to do it, but I cannot find any method other than the following (a command-level sketch follows the list):
1) Fail one disk.
2) Re-partition the disk to use the whole disk.
3) Re-add the drive as a spare.
4) Start the now-degraded array and let it resync.
5) Wait quite a while (approx. 5 hours).
6) Repeat from step 1 for the other disks.
7) Use mdadm's grow command to enlarge the array.
8) Use resize2fs to fill the array to maximum size.
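Here is roughly what one iteration of that procedure looks like in commands; a hedged sketch, assuming the array is /dev/md0 and the disk being replaced is /dev/sdX with member partition /dev/sdX1 (hypothetical names):
Code:
mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1   # step 1: drop the member out of the array
# step 2: repartition /dev/sdX with fdisk/parted so the partition spans the whole disk, then:
mdadm /dev/md0 --add /dev/sdX1                       # step 3: re-add it; the resync (steps 4-5) starts automatically
cat /proc/mdstat                                     # watch the rebuild progress
# after all members have been enlarged (step 6), grow the array and then the filesystem (steps 7-8):
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0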
Now, since this is a RAID 6 I keep some redundancy, but I still worry about degrading the array so many times and rebuilding the whole thing. I mean, I end up reading the whole thing out 3 or 4 times over doing this.
EFI GUID Partition (GPT) support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you don't include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table will get corrupted. By default Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile this feature.
Is this true? Does Ubuntu have no native GPT support in the server install? I've never compiled a kernel. Is that in itself going to be a mind bender? How much doo doo am I going to get into if I haven't done it a few times? Trust me when I tell you it's no thrill formatting and reformatting an 8TB RAID drive, or the individual disks, if it screws up. Been on that ride way too long already. I need things to go smoothly. This is not a play toy server; it will be used in a business.
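Rather than recompiling anything, it may be worth first checking whether the kernel you are actually running already has GPT support built in; a quick hedged check:
Code:
grep CONFIG_EFI_PARTITION /boot/config-$(uname -r)
# CONFIG_EFI_PARTITION=y means GPT partition tables are already supported and no kernel rebuild is needed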
RAID level 1 interested me because of its redundancy across both drives, and I successfully set it up on a couple of partitions. But I always did it after the Linux installation: I create both partitions, use 'mdadm' to create the raidtab and the RAID device (md0, for example), and then I format the RAID device with 'mkfs' and mount it.
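For reference, a minimal hedged sketch of that post-install sequence, with hypothetical partition names:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # build the mirror
mkfs.ext3 /dev/md0                                                       # put a filesystem on the RAID device
mount /dev/md0 /mnt/mirror                                               # and mount it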
Up to that point, it's all OK.
But my problem is mirroring the WHOLE hard disk, including the root partition. To do that, I guess I need to start with no Linux installation, then create the RAID (md0, raidtab, etc.) and after that install Linux onto the newly created RAID device.
But I'm new to the Linux world and I have no idea how to do that.
I use Debian Lenny, so I need a solution that uses only the first DVD of this distribution.
This might be a longshot, but I used to use the 4-bay NAS here, which formatted the HDD JBOD array as ext2. Anyway, I want to move to an 8-bay DAS now, but I can't find an application to convert the partitions.
Partition Magic says the partitions are unformatted and the partition ID is set to Linux raid auto. Is there any way to convert this partition to vanilla ext2 so I can use Partition Magic on it? It's a 4TB array, so copying isn't possible.
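If the NAS really wrote a plain ext2 filesystem straight onto the partition and only the partition type byte is set to "Linux raid autodetect", changing the type is non-destructive; a hedged sketch with a hypothetical device name (if the NAS wrapped the filesystem in an md or proprietary RAID layer, this alone will not be enough):
Code:
fdisk /dev/sdX          # use 't' to change the partition type from fd (Linux raid autodetect) to 83 (Linux), then 'w'
fsck.ext2 -n /dev/sdX1  # read-only check: does a valid ext2 filesystem actually start at the partition boundary?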
I'm building a NAS, based on the Intel SS4200. There are 4 drive bays in the machine for use with SATA disks, two of which I plan on filling now, the other two which I plan on filling later. The box also includes an IDE connector to which I will connect an 8GB Disk on Module onto which I will install Slackware. I wish to have all drives in the box show up as one contiguous volume. What partitioning/LVM/RAID configuration can I use which will allow me to:
1. Add a disk and transparently grow the available space of the volume? 2. Replace a disk with a larger disk and transparently grow the available space of the volume? 3. Lose a disk to hardware failure and replace it with a new one with no data loss?
If I use RAID 5, I'm pretty certain I can get numbers 1 and 3 above, but I'm not sure about number 2. The downside is that I'd have to start with 3 disks in the machine, and I'm unsure if adding a 4th disk whose size is larger than each of the 3 starting disks would lead to wasted space. For instance, if I start with three 1TB drives in RAID 5, and then add a 2TB 4th drive, would my available size go from 2TB to 3TB? Or from 2TB to 3.xTB?
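A quick sanity check on that arithmetic, assuming mdadm's usual behaviour of sizing every member to the smallest device in the array:
Code:
# 3 x 1TB in RAID 5  -> (3 - 1) x 1TB = 2TB usable
# add one 2TB disk   -> only 1TB of it is used: (4 - 1) x 1TB = 3TB usable
# the remaining ~1TB of the new disk sits unused unless it is partitioned and used separately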
Is it important in a RAID 5 setup to have all disks the same size? With LVM, I can certainly get number 1 above, but what about 2 and 3? I know you can use LVM to present many disks or partitions as one contiguous volume, but if I have two 1TB drives in one volume, and only have 300GB of data, then would the second drive remain empty until I broke the 1TB barrier? In this case, it's wasted space from the get go. I suppose another option would be to start with RAID 1 until I can afford a third disk.
When adding the new disk, could I switch to RAID 5 without data loss? I'm planning on maintaining a full mirror of the NAS on some USB disks as a backup, so if configuration changes to the NAS require wiping the disks and restoring from backup, it's not a total loss. However, it certainly makes me nervous to be in a state where only one copy of the data exists, so I'd rather find a solution where I can add and upgrade disks in the NAS without relying on the backup copies.
I have installed Ubuntu 10.04 LTS on a machine but wanted to choose RAID and LVM in the partitioning scheme. Unfortunately I couldn't find such an option during partitioning. Is it true that Ubuntu 10.04 LTS cannot use RAID and LVM?
I have recently installed Debian on my NAS server. I have also configured Samba to share the home directory of a nas user, i.e. /home/nas. I have read/write access to this directory from a Windows machine using the nas user credentials. When I mount my RAID partition /dev/md0p1 onto the /home/nas directory, I then see that all content in this directory (files and subfolders) is owned by the root user. When trying to access the /home/nas directory from the Windows machine, I no longer have any write access, only read. I have tried both the nas and the root user credentials.
I have also attempted to change the ownership of the mounted RAID partition to the nas user with the -R recursive option, but for the internal files/subfolders I get the error "operation not supported".
How can I overcome this problem?
- Is there something not done properly in the /dev/md0 array definition (i.e. ARRAY /dev/md0 level=raid1 num-devices=2 UUID=bddf8b69:c97967b5:cb104784:7fef7cc3)?
- Is there something not done properly in the /dev/md0p1 mounting (i.e. mount /dev/md0p1 /home/nas)?
- Should I do any extra configuration before the mounting, etc.? I would really appreciate any kind of help I could get.
Some background info:
b) After OS boot, when I do # cat /proc/mdstat, I get:
Personalities : [raid1]
md0 : active (auto-read-only) raid1 sda1[0] sdb1[1]
      4200896 blocks
unused devices: <none>
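Since "operation not supported" from a recursive chown usually points at the filesystem rather than at Samba, it may be worth confirming what is actually on /dev/md0p1 and how it is mounted before changing anything on the Samba side; a hedged diagnostic sketch using the names from the post:
Code:
blkid /dev/md0p1              # which filesystem is on the RAID partition?
mount | grep /home/nas        # which options was it mounted with?
# if it is a POSIX filesystem (ext3/ext4/xfs), ownership can then be handed to the nas user:
chown -R nas:nas /home/nas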
This concerns Linux, mdadm, and creating RAID arrays in Debian. I've done a lot of reading and research on RAID, both on this board and elsewhere (The Linux Documentation Project's Software-RAID HOWTO is especially good), but I've run across something that no one seems to explain, and I'm not sure why. I'm instructed to create partitions on the drives I wish to add to my array. These partitions inevitably take up the whole disk, and always have their system IDs set to "Linux raid autodetect". What I don't understand is why, after creating these partitions, some guides then go on to create an array (say a RAID 5 one) with just the disks themselves as members, while others create the RAID 5 array with the previously created partitions as members. E.g.,
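The two variants being described look roughly like this; a hedged illustration with hypothetical device names, not taken from any particular guide:
Code:
# whole disks as members (no partition table on the members at all):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# partitions of type "Linux raid autodetect" as members:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1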
I am finally, and happily, ditching Windows IIS, SQL Server, and ASP in favor of LAMP. Not only will I save a bunch of money on operating systems, but I've found PHP and MySQL development to be much faster than their Microsoft counterparts. Currently I have two W2008 and two Ubuntu servers running and doing virtually parallel tasks. I want to can the W2008 machines, but I am not 100% sure of my Ubuntu mirrors. Everything seems to be working fine. I've copied tons of data back and forth as a primitive test, but sometimes things work fine for all the wrong reasons. Here's where I get confused.
Question 1: Do I need to partition the RAID device (md0) and then format it? From my experience this is necessary to get the device to mount.
Question 2: In this case, was it also necessary to format the individual drive partitions?
Question 3: If I do a daily cat /proc/mdstat, is that all I need to do to check the drive status?
Question 4: Is there any other check I can do to assure that the mirrors are created, mounted, and operating correctly?
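A hedged sketch of the kind of checks questions 3 and 4 are after (array name taken from the post; the mail address is just an example):
Code:
cat /proc/mdstat                                   # daily check: [UU] next to md0 means both mirror halves are active
mdadm --detail /dev/md0                            # fuller picture: state, failed/active/spare device counts
mdadm --monitor --scan --daemonise --mail=root     # optional: have mdadm mail root when a device fails or an array degrades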
I have been working on this in my spare time over the last few days. I had problems with grub2 not installing on RAID 1. Then I found out that I had to have a separate non-RAID ext4 partition on one drive with "/boot" as the mount point, and the install went through fine.
My question is: will the developers FIX this? I mean, if I only have 2 x 2TB drives in the PC as RAID 1, and the one with the boot partition fails, the whole machine goes down. Kinda defeats the whole RAID feature, huh?
I USED to run a Windows network with RAID 5 servers, and Windows never had a problem installing everything on the RAID. Or even setting up RAID 1 in the BIOS on the PC and installing XP or NT4 on the RAID 1. There was never a need for a separate non-RAID boot partition.
Do we need a separate RAID-friendly/enabled GRUB? I guess for now I will get a 250GB drive and install the server OS to it, and set up RAID 1 manually after the server OS boots. Then I will make a Ghost-type image of the server in case that drive fails, so I can quickly install a new drive, restore the image, and get it back up and running again.
I've finally found a couple of useful tutorials on setting up RAID in Linux. However, because this is new ground to me, I have a couple of basic questions which I think the tutorial writers gloss over because of their familiarity with the process. My questions are these:
1. Most tutorials speak about setting up only one partition on clean drives. Can I set up more than one (e.g. / and /home) to be mirrored as two partitions? (See the sketch after these questions.)
2. When starting with two identical clean drives, do I need to set up my partitions identically on both drives or is it only the partitions that I want mirrored to the second drive?
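A hedged sketch of what question 1's layout could look like, assuming both drives are partitioned identically and each pair of matching partitions becomes its own mirror (hypothetical device names):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # mirror for /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # mirror for /home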
I found a workaround of sorts. It looks like this is related to a 9.04 bug [URL], and the loopback workaround brings the array back. It is not clear how I will handle this long term.
Note: before using this technique, I used gparted to tag the partitions as "raid". They disappeared again on reboot, so I had to do it again. I am not sure how this is going to work out long-term.
Note: I suspect some of this is related to the embedded "HOMEHOST" that is written into the RAID metadata on the partitions. The server was misnamed when first built and the name was changed later (cerebus -> cerberus), and the old name has surfaced in the name of a phantom device reported by gparted - /dev/mapper/jmicron_cerebus_root
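If the stale HOMEHOST in the metadata really is part of the problem, mdadm can rewrite it while assembling the array; a hedged sketch, using the array and member names given below for the 8.10 layout:
Code:
mdadm --assemble /dev/md16 --update=homehost --homehost=cerberus /dev/sda5 /dev/sdb5
mdadm --detail /dev/md16       # confirm the array came up with the corrected name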
I have a Mythbuntu 9.10 system that I have upgraded from 8.10 to 9.04 to 9.10 in the last two days. I am on my way to 10.x, but need to make sure it works after every step.
The basic problem is that in its current incarnation, it is not recognizing the underlying partitions for one of the RAID devices, and is therefore not happy.
As an 8.10 system it had two RAID devices:
/dev/md16 -> /dev/sda5 and /dev/sdb5
/dev/md21 -> /dev/sdc1 and /dev/sdd1
/etc/fstab looked like this (in part):
/dev/md16  /var/lib  xfs  defaults  0  2
[Code]....
I don't *really* want to repartition the drive, as a small amount of data on the drive is newer than my recent backups, plus it would take me two days to move the data back.
I wanted to implement RAID 5 such that one partition is from my laptop's hard disk and the others are from other hard disks. After making one partition a RAID partition, I rebooted the system. The computer stopped mid-way during booting and dropped me to a shell. On typing fsck -p, it told me an unexpected error occurred in the partition which I had made for RAID. Is there some condition that we cannot boot from a disk containing one of the RAID partitions?
I have just configured RAID 1 on my IBM X3400 server in CentOS 5. The partition layout is md0 boot, md1 swap and md2 root. But after the resync, when I run cat /proc/mdstat, I see that md0 and md1 are okay and present with [UU] status, BUT md2 is showing only one device, [U_]. That means my root partition is not properly in RAID 1. How can I make it active on both drives? Or do I need to reinstall the complete system again? Screenshot attached [URL]
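There is usually no need to reinstall; the missing member can be re-added and md will resync it. A hedged sketch, assuming the second disk's root partition is /dev/sdb3 (hypothetical; check the real name with fdisk -l and mdadm --detail first):
Code:
mdadm --detail /dev/md2              # see which device is missing from the array
mdadm /dev/md2 --add /dev/sdb3       # re-add the missing partition; the resync starts automatically
cat /proc/mdstat                     # watch until md2 also shows [UU]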
I've installed Debian Squeeze recently and for some reason I have problems with mounting the /home partition during startup.
There's an error:
Mounting local filesystems...
mount: special device /dev/mapper/isw_bbfedcffgi_Volume0p6 does not exist. failed
I've tried using fsck - no result, the file system is healthy. I've tried formatting it once again (fresh copy, no user data) and it's still not working. What is more, mounting the partition manually goes well - I can read the data and write to it. All other partitions are OK.
I have no idea what's going on and why mounting /home fails. I've written this post on a Polish Debian users forum, but the only response was to ask for more info, so I'll put it here as well:
ls -al /dev/mapper
crw------- 1 root root 10, 59 Nov 9 19:34 control
lrwxrwxrwx 1 root root      7 Nov 9 19:34 isw_bbfedcffgi_Volume0 -> ../dm-0
lrwxrwxrwx 1 root root      7 Nov 9 19:34 isw_bbfedcffgi_Volume01 -> ../dm-2
lrwxrwxrwx 1 root root      7 Nov 9 19:34 isw_bbfedcffgi_Volume05 -> ../dm-3
lrwxrwxrwx 1 root root      7 Nov 9 19:34 isw_bbfedcffgi_Volume06 -> ../dm-4
code....
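One thing that stands out in the listing above: the partition nodes are named isw_bbfedcffgi_Volume06 (no "p"), while the boot-time error complains about isw_bbfedcffgi_Volume0p6. If the device-mapper naming simply changed, a hedged workaround is to point the /home line in /etc/fstab at the filesystem's UUID instead of the device-mapper name:
Code:
blkid /dev/mapper/isw_bbfedcffgi_Volume06        # get the UUID of the /home filesystem
# then in /etc/fstab, replace the device path with the UUID, for example:
# UUID=<uuid-from-blkid>  /home  ext4  defaults  0  2
# (ext4 here is just an example; use whatever the partition is actually formatted as)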