General :: RAID Devices Not Creating Arrays / Solution For This?
Mar 30, 2010
I have created a software RAID 5 configuration on the second hard drive and it works fine. I have edited the fstab file so it auto-mounts on reboot, but when I reboot the computer the RAID doesn't come up; I have to re-create the array by typing the "mdadm --create" command again and mount it manually. Is there any way I can do this once, without retyping the commands after every reboot? I am also using Red Hat 5.
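The usual fix is to record the array in /etc/mdadm.conf so it is assembled at boot rather than re-created; re-running "mdadm --create" on every boot risks data loss. A minimal hedged sketch, assuming the array is currently running:
Code:
# capture the running array's definition so boot-time assembly can find it
mdadm --detail --scan >> /etc/mdadm.conf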
View 1 Replies
Dec 7, 2010
I'm working on a server and noticed that the RAID5 setup is showing 4 Raid Devices but only 3 Total Devices. It's a fully updated CentOS 5 system that has only three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Does anyone have any pointers on how I can reduce the Raid Devices count from 4 to 3? I have tried
mdadm /dev/md2 -r failed
mdadm /dev/md2 -r detached
but neither works, and since there is no block device listed I'm not quite sure how to get things back in sync so it's only seeing the three drives.
/dev/md2:
Version : 0.90
Creation Time : Tue May 25 11:07:04 2010
Raid Level : raid5
[code]....
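One hedged possibility, if the fourth slot is a leftover from an earlier grow attempt: shrink the slot count, since -r only acts on real member devices. A sketch using the array name from the post; note that reducing raid-devices on a raid5 shrinks its capacity and needs reshape support in the kernel and mdadm, which the stock CentOS 5 kernel may lack:
Code:
mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-reshape.bak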
View 8 Replies
View Related
Jul 6, 2010
So I have a system that is about 6 years old, running Redhat 7.2, that is supporting a very old app that cannot be replaced at the moment. The JBOD has 7 RAID1 arrays in it, 6 of which are for database storage and another for the OS storage. We've recently run into some bad slowdowns and drive failures causing nearly a week of downtime. Apparently none of the people involved, including the so-called hardware experts, could really shed any light on the matter. Out of curiosity I ran iostat one day for a while and saw numbers similar to below:
[Code]...
Some of these kind of weird me out, especially the disk utilization and the corresponding low data transfer. I'm not a disk I/O expert, so I'm hoping there are gurus out there willing to help explain what it is I'm seeing here. As a side note, the system is back up and running; it just runs sluggish, and neither the database folks nor the hardware guys can make heads or tails of it. I've sent them the same graphs from iostat but so far no response.
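For anyone reproducing the numbers, extended per-device statistics come from sysstat's iostat; a hedged sketch of the invocation (interval and count are arbitrary):
Code:
# kilobytes, extended device stats, every 5 seconds, 10 samples
iostat -d -x -k 5 10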
View 1 Replies
View Related
Jun 10, 2011
With mdadm I was only able to add a new drive to the array by using --force. I do not feel comfortable using the option that way, though. When I remove a disk in VMware, mdadm correctly reports that the drive is lost and the array is degraded (mdadm --detail /dev/md0). But after re-adding the drive, both mdadm and sfdisk immediately report the device as busy when I don't use --force.
Recovery and repair of the degraded array worked fine with sfdisk --force and mdadm --add --force; it automatically started recovering and didn't take long. What are best practices for managing software RAID-1 arrays?
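The device-busy symptom often means the stale member is still attached to the array's metadata; the usual non-forced sequence is to fail and remove the old slot before adding. A hedged sketch with a hypothetical member /dev/sdb1:
Code:
mdadm /dev/md0 --fail /dev/sdb1      # mark the stale member failed
mdadm /dev/md0 --remove /dev/sdb1    # detach it from the array
mdadm /dev/md0 --add /dev/sdb1       # re-add; resync starts automatically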
View 1 Replies
View Related
Apr 15, 2011
Have a customer who is due for a new system. As they just renewed their RHEL entitlement, they plan on ordering a Dell server without an OS preload. Two questions:
- Will RH let them download RHEL6 just by maintaining the entitlement when their current version is RHEL 3?
- The server will have two RAID arrays - one intended for /home, one for "everything else". As I've never done a clean load with two arrays, how do I select what file systems go on which array?
View 3 Replies
View Related
Jan 3, 2010
I am in a situation where I am stuck with an LVM cleanup process. Although I know a lot about AIX LVM, this is the first time I am working with Linux LVM2. The problem is that I created two RAID arrays on storage, which appeared as mpath0 & mpath1 devices (multipath) on RHEL. I created logical volumes and volume groups and everything was fine till I decided to clean the storage arrays and ran the following script:
#!/bin/sh
# remove the LV, VG, and PV for each array number listed in the input file
cat /scripts/numbers | while read numbers
do
lvremove -f /dev/vg$numbers/lv_vg$numbers
vgremove -f vg$numbers
pvremove -f /dev/mapper/mpath${numbers}p1   # braces needed, or the shell reads $numbersp1
done
Please note that numbers was a file in the same directory, having the numbers 1 and 2 on separate lines. The script worked well and I was able to delete the definitions properly (however, I now think I missed a parted command to remove the partition definition from the mpath device). When I created three new arrays, I got devices mpath2 to mpath5 on Linux, and then I created vg0 to vg2. By mistake, I ran the above script again for cleanup purposes, and now I get the following error message:
Can't remove physical volume /dev/mapper/mpath2p1 of volume group vg0 without -ff
After searching my mind, I now realize that I have messed up (particularly because the mpath devices did not map in sequence to the vg devices; the mapping was mpath2 --- to ---- vg0 and onwards). Now how can I clean up the LVM definitions? Should I go for pvremove with the -ff flag or investigate further? I am not concerned about data; I just want to clean up these pv/vg/lv/mpath definitions so that LVM can be cleaned up properly and I can start over with new RAID arrays from storage.
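Since the data is expendable, the force flags are the usual way out; a hedged sketch (names taken from the error message; verify each device against pvs/vgs output first):
Code:
vgremove -f vg0                       # drop the VG definition
pvremove -ff /dev/mapper/mpath2p1     # double-force clears the orphaned PV label
dmsetup remove mpath2p1               # only if a stale device-mapper entry lingers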
View 1 Replies
View Related
Jan 9, 2011
I'm trying to set up a RAID 5 array of 3x2TB drives and noticed that, besides having a faulty drive listed, I keep getting what look like two separate arrays defined. I've set up the array using the following:
sudo mdadm --create /dev/md01 --verbose --chunk=64 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sde
So I've defined it as md01, or so I think. However, looking in the Disk Utility, the array is listed as md1 (degraded) instead. Sure enough I get:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sde[3](F) sdc[1] sdb[0]
3907028992 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
So I tried getting info from mdadm on both md01 and md1:
user@al9000:~$ sudo mdadm --detail /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Sun Jan 9 10:51:21 2011
Raid Level : raid5 ......
Is this normal? I've tried using mdadm to --stop and then --remove both arrays and start from scratch, but I end up in the same place. I'm just getting my feet wet with this, so perhaps I'm missing some fundamentals here. I think the drive fault is a separate issue; strange, since the Disk Utility says the drive is healthy, and I'm running the self-test now. Perhaps a bad cable is my next check...
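For the record, /dev/md01 and /dev/md1 are the same array: the kernel keys on the minor number, so md01 is just another spelling of md1, not a second device. When starting over, clearing the old superblocks keeps ghost arrays from reappearing; a hedged, destructive sketch using the disks from the post:
Code:
sudo mdadm --stop /dev/md1
sudo mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sde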
View 3 Replies
View Related
Jun 21, 2011
What do I have:
- 2x 150GB drives (sda) on a raid card (raid 1) for the OS (Slackware 13.37)
- 2x 2TB drives (sdb) on that same raid card (raid 1, too)
- 2x 1.5TB drives (sdc, sdd) directly attached to the MoBo
- 2x 750GB drives (sde, sdf) attached to the MoBo too.
If I went about it the normal way, I'd create softRAID 1 out of the 1.5TB and the 750GB drives and LVM all the data arrays (2TB + 1.5TB + 750GB) together to get a unified disk. If I use btrfs, will I be able to do the same? I mean, I have read how to create raid arrays with mkfs.btrfs and that some LVM capability is incorporated in the filesystem, but will it understand what I want it to do if I just say
Code:
mkfs.btrfs /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
probably not, eh?
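Probably not with that exact invocation: mkfs.btrfs needs the profiles spelled out. A hedged sketch of the mirrored-data equivalent (device names from the post; btrfs raid1 keeps two copies spread across the pool rather than pairing specific disks):
Code:
mkfs.btrfs -d raid1 -m raid1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1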
View 3 Replies
View Related
Jun 5, 2010
I have never performed a rebuild of a RAID array. I am collecting resources which detail how to rebuild a RAID 5 array when one drive has failed. Does the BIOS on the RAID controller card start to rebuild the data onto the new drive once it is installed?
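Most hardware controllers start the rebuild from their BIOS utility or automatically once the replacement is detected, depending on the card's hot-spare policy. For the software-RAID equivalent, the rebuild is triggered explicitly; a hedged sketch with hypothetical names:
Code:
mdadm /dev/md0 --add /dev/sdb1   # resync onto the replacement begins on add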
View 4 Replies
View Related
Mar 30, 2010
I want to remove the RAID 1 arrays on our CentOS server and use the drives standalone.
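A hedged outline of the usual teardown (destructive to the array; assumes /dev/md0 built from /dev/sda1 and /dev/sdb1, and a backup first):
Code:
umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1   # so the members stop auto-assembling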
View 3 Replies
View Related
May 13, 2010
I have two SAS RAID controller cards in a Dell server, in slots 2 & 3, both with an array hanging off them. I went to install a third card into slot 1, but when it boots it says two of my sd's have a bad magic number in the super-block and it wants me to create an alternative one, which I don't want to do. If I remove the new card, the server boots perfectly, like it did before I added the new card. Is the new card trying to control stuff that isn't hooked up to it because it's in slot 1, so it's confusing RHEL?
View 5 Replies
View Related
May 23, 2010
I've got a Gentoo box that I'm interested in switching over to an Ubuntu box.
I currently have the partitions laid out using a mixture of RAID (mdadm) and LVM2, as specified in this document [1].
Ideally I'd like to just wipe out the non-/home partitions, since /home has data I'd like to keep.
Is it possible to reuse the current setup, or do I need to start over? vgdisplay, vgchange -a y, etc. don't yield any results from the Ubuntu LiveCD, and I'm wary of running any commands that might wipe my data.
[1] [url]
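If LVM sits on top of mdadm, the PVs stay invisible until the md arrays are assembled, which would explain the empty vgdisplay. A hedged, non-destructive sequence to try from the LiveCD:
Code:
sudo mdadm --assemble --scan   # bring up the existing md arrays
sudo vgscan                    # now LVM can see the PVs on them
sudo vgchange -a y             # activate the volume groups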
View 1 Replies
View Related
Mar 19, 2011
I just tried creating a RAID device which has both striping and mirroring. I did it as below:
mdadm -C /dev/md00 -l1 -n2 /dev/sda7 /dev/sda8
mdadm -C /dev/md01 -l1 -n2 /dev/sda9 /dev/sda10
mdadm -C /dev/md02 -l0 -n2 /dev/md00 /dev/md01
pvcreate /dev/md02
vgcreate volgroup02 /dev/md02
lvcreate -n orac -L 9G volgroup02
[Code]...
Everything is fine until here, but after a reboot the device won't mount on /orac; it says the special device is not available. I found that the md02 device is not in an active state.
I tried deleting it and recreating it, but no use; it still won't persist across a reboot.
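The usual cause is that nothing re-assembles the nested arrays at boot, and the md00/md01 legs must exist before md02 can. A hedged sketch of persisting the stack (device names from the post):
Code:
mdadm --detail --scan >> /etc/mdadm.conf   # record all three arrays
# in mdadm.conf, keep the md00/md01 ARRAY lines above md02 so the stripe assembles last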
View 1 Replies
View Related
Apr 7, 2011
I am trying to create a RAID 1 RAM disk. Below are the commands I used:
[root@abidbodal dev]# mke2fs -m 0 /dev/ram8
[root@abidbodal dev]# mount /dev/ram8 /mnt/rd8
[root@abidbodal dev]# mke2fs -m 0 /dev/ram9
[code]....
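One hedged note on ordering: an md array is normally created over the raw block devices first, and the filesystem made on the array rather than on each member. A sketch, assuming the same ram devices:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/ram8 /dev/ram9
mke2fs -m 0 /dev/md0
mount /dev/md0 /mnt/rd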
View 3 Replies
View Related
Mar 23, 2011
I've found that Ubuntu doesn't properly recognize disks attached to a Promise FastTrak TX4310 RAID5 card. I have a possible solution, but I could sure use some feedback from people more experienced with 'nix and drivers. Please advise?
Promise has TX4310 drivers for SUSE and RHEL4 (note: I've never heard of that one before), but their support person said that their open source drivers should be ok to compile with any version of Linux based on the 2.6 core. Therefore, I'm wondering if these open source drivers could simply be recompiled for Ubuntu and used?
Please excuse my ignorance with this, as I'm new to 'nix, having 30 yrs of experience only with big iron and Windows. I'm ramping up quickly but haven't been down the path of compiling drivers yet; not that it'd stop me.
Does this idea make sense?
Has anyone tried it?
Please help if you can, as I'd really hate to go back to a Windows box simply because of a lack of Promise driver support.
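In general, an out-of-tree driver for a 2.6 kernel builds against the running kernel's headers; a generic hedged sketch (not Promise-specific, and the module name is hypothetical):
Code:
cd promise-driver-src/
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
sudo insmod ./ft3xx.ko   # hypothetical module name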
View 1 Replies
View Related
Mar 3, 2011
I found a workaround of sorts. It looks like this is related to a 9.04 bug [URL] and the loopback workaround brings back the array. It is not clear how I will handle this long term.
Note: before using this technique, I used gparted to tag the partitions as "raid". They disappeared again on reboot, so I had to do it again. I am not sure how this is going to work out long-term.
Note: I suspect some of this is related to the embedded "HOMEHOST" that is written into the RAID metadata on the partitions. The server was misnamed when first built and the name was changed later (cerebus -> cerberus), and the old name has surfaced in the name of a phantom device reported by gparted - /dev/mapper/jmicron_cerebus_root
I have a mythbuntu 9.10 system that I have upgraded from 8.10 to 9.04 to 9.10 in the last 2 days. I am on my way to 10.x, but need to make sure it works after every step.
The basic problem is that in its current incarnation, it is not recognizing the underlying partitions for one of the RAID devices, and is therefore not happy.
As a 8.10 system I had 2 raid devices:
/dev/md16 -> /dev/sda5 and /dev/sdb5
/dev/md21 -> /dev/sdc1 and /dev/sdd1
/etc/fstab looked like this (in part):
/dev/md16 /var/lib xfs defaults 0 2
[Code]....
I don't *really* want to repartition the drive as there is a small amount of data loss between recent backups and what is on the drive, plus it would take me 2 days to move the data back.
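If the stale HOMEHOST really is the culprit, mdadm can rewrite it at assembly time; a hedged sketch using the names from the post:
Code:
mdadm --assemble --update=homehost --homehost=cerberus /dev/md16 /dev/sda5 /dev/sdb5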
View 2 Replies
View Related
Apr 13, 2010
I installed Fedora 12 and performed the normal updates. Now the machine won't boot, and I get the following console error message.
ERROR: via: wrong # of devices in RAID set "via_cbcff jdief" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "via_cbcff jdief"
ERROR: no RAID set found
No root device found
Boot has failed, sleeping forever.
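These messages come from dmraid choking on leftover fakeraid metadata that describes a two-disk VIA set with only one member present. One hedged (and destructive) way out, if the disk is meant to run without the VIA set:
Code:
dmraid -r             # list what dmraid detects first
dmraid -rE /dev/sda   # erase the stale RAID metadata on that member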
View 14 Replies
View Related
Aug 31, 2010
I have been having this problem for the past couple of days and have done my best to solve it, but to no avail. I am using mdadm, which I'm not the most experienced with, to make a raid5 array using three separate disks (/dev/sda, /dev/sdc, /dev/sdd). For some reason not all three drives are being assembled at boot, but I can re-add the missing drive without any problems later; it's just that this takes hours to re-sync. Here is some information:
[Code]....
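A hedged guess at the usual fix: pin the members and the array in mdadm's config so assembly doesn't depend on boot-time scan order (the UUID below is a placeholder for the one mdadm --detail reports):
Code:
DEVICE /dev/sda /dev/sdc /dev/sdd
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx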
View 11 Replies
View Related
Apr 7, 2010
I have installed Ubuntu Studio 9.10 on my Dell Dimension 1100 desktop, and I'm trying to set up RAID 1 because I'm constantly worried that my hard disk is going to fail. I have 2 drives: one 40GB and one 80GB. So I created a 40GB partition on my 80GB drive, and I want to raid this partition with the 40GB drive. Is this possible? And am I right in thinking that I can raid everything, including /boot?
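In principle yes: md only cares that the members are block devices of similar size, not that they are whole disks. A hedged sketch, assuming the 40GB disk is partitioned as /dev/sda1 and the 40GB slice of the 80GB disk is /dev/sdb1:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
/boot on RAID 1 generally works, since each member carries a normal filesystem the bootloader can read.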
View 6 Replies
View Related
Jun 27, 2010
I have recently installed a Asus M4A77TD Pro system board which supports raid.
I have 2 x 320GB SATA drives I would like to set up RAID 1 on. So far I have configured the BIOS to RAID 1 for the drives, but when installing Ubuntu 10.04 from the CD it detects the raid configuration but fails to format.
When I reset the BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I then just have 2 x drives without any raid options. I had this working in my previous setup, but that's because I had the O/S on a separate drive from the raid and was able to do this within Ubuntu.
View 3 Replies
View Related
Apr 24, 2009
I have a system that has the following partitions:
Now SDC is a new drive I added. I would like to pool that new drive with the RAIDed drives to give myself more space on my existing system (and structure). Is this possible, since my raid already has data on it?
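If the existing RAID device is already an LVM physical volume, the new disk can join the pool without disturbing the data; a hedged sketch with hypothetical VG/LV names:
Code:
pvcreate /dev/sdc1                 # label the new partition as a PV
vgextend myvg /dev/sdc1            # add it to the existing volume group
lvextend -L +100G /dev/myvg/mylv   # grow a volume into the new space
resize2fs /dev/myvg/mylv           # grow the filesystem to match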
View 1 Replies
View Related
Mar 2, 2011
I have a server that was running a hardware isw raid on the system (root) disk. This was working just fine until I started getting sector errors on one of the disks. So I shut down the system, removed the failing drive, and installed a new drive (same size). On reboot I went into the Intel raid setup; it did show the new drive and I was able to set it to rebuild the raid. Continuing the reboot, everything came up just fine except the raid 1 on the system disk. I have tried many times to get the system to rebuild the raid using dmraid, but to no avail; it would not start a rebuild. In order to get the system back up and make sure that the disk was duplicated, I was able to 'dd' the working disk to the new disk that was installed.
At present the system does not show up with a raid setup on the system disk (this comprises the entire 1TB disk, with two partitions: sda1 as / and sda2 as swap).
Problem: I have decided to forego the Intel raid and just use mdadm. I have a test system set up to duplicate the server setup (not the software, but the disk partitions).
Code:
[root@kilchis etc]# fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
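A hedged outline of the usual in-place migration: build a degraded md mirror on the second disk, copy the system over, then add the original disk as the second member (device names hypothetical; sdb is the spare):
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0               # filesystem type assumed
# copy / onto /dev/md0 and point the bootloader at it, then:
mdadm /dev/md0 --add /dev/sda1   # original disk becomes the mirror's second half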
View 12 Replies
View Related
Apr 17, 2010
The intention is to have this system dual-boot. When I first put it together, I decided to set up a raid5 array spanning 3 SATA drives. I installed Windows 7 first and decided I'd get to Linux later. I left 150MB or so at the beginning of the array for /boot, and about 200GB at the end for my Linux install. Now I'm getting to the Linux install. My distro of choice is Fedora 12. I start the setup, and at the point where it's time to partition, the installer tells me that it's unable to find any suitable storage devices.
I Ctrl-Alt-F2 to a console and run fdisk -l. Fdisk reports three individual drives which all have partitions already. All have free space. None make sense. So I turned to google and found some threads which explain that this chip doesn't run a true raid; rather, it's what's been referred to as fake raid, which means it depends on the Windows driver to actually present the array to the OS, and that the best way to get by that on Linux is to break the array and use LVM instead.
That's all well and good, but I lose two things in doing that. First I lose the resiliency of raid 5, and second, well, what does that do to my Windows install? I've considered moving all of my data from Windows to other machines and just starting from scratch, but I'd really much prefer a method of using the chip's fake raid in Linux. Is there a driver or module I can install to make this happen?
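The stock answer for fakeraid on Linux is dmraid, which reads the BIOS metadata and exposes the set through device-mapper without breaking the Windows side; a hedged check from the installer console:
Code:
dmraid -s    # list the RAID sets the metadata describes
dmraid -ay   # activate them as /dev/mapper/* devices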
View 3 Replies
View Related
Mar 24, 2011
It's been a while since I configured a raid and have been making some changes to my main workstation/server.
fdisk does not like md devices on my machine... it always says they have an invalid partition table. While this is said to be normal all over the net, I don't feel warm and fuzzy about that fact. What is best practice these days: create a non-partitionable md device, or a partitionable mdp device?
If I create a partitionable md device, I imagine it would look good in fdisk. However, I am concerned about growing the array afterward: I would then have to grow the array, redefine the partition, and then grow the file system. The PITA factor goes up. Has anyone worked with both? Pros/cons? My array was created with:
mdadm --create --verbose /dev/md0 --level=5 --force --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
FYI: I have backups. I understand RAID 1 may be a better choice of raid level.
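For comparison, growing the non-partitioned layout skips the repartitioning step entirely; a hedged sketch of adding a fourth member (assumes a filesystem that resize2fs can grow):
Code:
mdadm /dev/md0 --add /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4   # reshape onto the new member
resize2fs /dev/md0                       # grow the filesystem over the new space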
View 3 Replies
View Related
Oct 27, 2010
We have some servers that run in very harsh environments (research vessel) that need to have high availability. We have software RAID 1 for some measure of resiliency, along with proper data backups (tapes etc.); however, we would like to be able to break out a new server and re-image it (including the RAID setup) from a known good copy if the hardware completely fails on the production box. Simplicity of the process is a big plus. I am interested in any advice on the best way to approach this. My current approach (relatively new to Linux administration, totally new to mdadm) is to use dd to take a complete gzipped copy of one of the RAID'ed devices (from a live CD):
Code:
dd if=/dev/sda bs=4096 | gzip -c > /mnt/external/image/test.img
then reverse the process on the new PC, finally using
Code:
mdadm --assemble
to re-create and rebuild the array.
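The reverse leg might look like this hedged sketch (run from a live CD on the replacement box; device and path names carried over from above):
Code:
gzip -dc /mnt/external/image/test.img | dd of=/dev/sda bs=4096
mdadm --assemble --scan   # metadata on the imaged disk identifies the array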
View 1 Replies
View Related
Jun 2, 2010
Is there a way to compare an array in a while condition?
I have one array that contains the results of some search, and if the script has found all the items then it should stop, so my idea is to have a while loop à la:
Code:
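The original snippet didn't survive the export, but a hedged sketch of the shape being described, in bash (all names hypothetical):
Code:
#!/bin/bash
wanted=5      # total items the search must find
found=()      # results collected so far
while [ "${#found[@]}" -lt "$wanted" ]; do
    # ... search step appends its hits ...
    found+=("item")
done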
View 4 Replies
View Related
Mar 11, 2011
If I unmount both of them, can I run an e2fsck on each at the same time through 2 putty sessions, or will that not really gain me anything over doing them one after another?
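On independent spindles the two checks generally do overlap usefully; a hedged sketch of doing it from one shell instead of two sessions (device names hypothetical, both filesystems unmounted):
Code:
e2fsck -f /dev/sdb1 &   # check the first filesystem in the background
e2fsck -f /dev/sdc1 &   # and the second alongside it
wait                    # block until both finish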
View 3 Replies
View Related
Sep 30, 2010
I am writing a script to get the multiples of 2 and 3, place them in 2 arrays, and then show the common integers. So far everything works fine till the comparison. I don't know how to compare them. Here is the code:
Code:
#!/bin/bash
num1=2   # first base: multiples of 2
num2=3   # second base: multiples of 3
[code]...
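A hedged sketch of the missing comparison step, assuming the two arrays are named arr2 and arr3 (names hypothetical):
Code:
common=()
for a in "${arr2[@]}"; do
    for b in "${arr3[@]}"; do
        [ "$a" -eq "$b" ] && common+=("$a")
    done
done
echo "Common integers: ${common[*]}"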
View 6 Replies
View Related
Oct 16, 2010
I'm setting up a raid 5 on several hard disks, with a layer of LVM on top for good measure. I know recent kernels support growing software raid, but since CentOS runs 2.6.18, I wanted to make sure it'll work. Does the CentOS kernel support growing raid devices?
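If the backported CentOS kernel does support the reshape, the grow itself is the standard sequence; a hedged sketch that also bumps the LVM layer (names hypothetical):
Code:
mdadm /dev/md0 --add /dev/sde1           # new member
mdadm --grow /dev/md0 --raid-devices=5   # reshape the raid5 onto it
pvresize /dev/md0                        # let LVM see the larger PV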
View 1 Replies
View Related