CentOS 5 :: Remove The Raid 1 Arrays On Server And Use Standalone Drive?

Mar 30, 2010

I want to remove the RAID 1 arrays on our CentOS server and use a standalone drive.

View 3 Replies


CentOS 5 Server :: Best Practises For Managing Software Raid Arrays?

Jun 10, 2011

With mdadm I was only able to add a new drive to the array by using the --force option. I do not feel comfortable using the option that way, though.

When I remove a disk in VMware, it correctly reports that the drive is lost and the array is degraded (mdadm --detail /dev/md0). However, after re-adding the drive, both mdadm and sfdisk immediately report the device as busy unless I use --force.

Recovery and repair of the degraded array worked fine with sfdisk --force and mdadm --add --force; it automatically started recovering and did not take long.

What are best practices for managing software RAID 1 arrays?
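For reference, the usual drill for swapping a disk out of an md RAID 1 array avoids --force entirely; the device-busy error typically means the old member is still attached to the array. A sketch, assuming the array is /dev/md0, the surviving disk is /dev/sda, and the replacement is /dev/sdb (hypothetical names):

```shell
# Mark the old member failed and pull it out of the array first;
# skipping this step is what makes mdadm report the device as busy.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Copy the partition table from the surviving disk, then re-add.
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1

# Watch the resync progress.
cat /proc/mdstat
```

Once the member is failed and removed cleanly, --add works without --force.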

View 1 Replies View Related

Ubuntu Servers :: RAID Arrays Rebuild The Data On The New Drive?

Jun 5, 2010

I have never preformed a rebuild of an RAID array. I am collecting resources, which details how to build an RAID 5 array when one drive has failed. Does the BIOS on the RAID controller card start to rebuild the data on the new drive once it is installed?

View 4 Replies View Related

CentOS 5 Server :: Remove/Disable Software Raid Without Formatting?

Nov 8, 2010

I am currently running CentOS 5.5 on a software raid 1 setup.

I would like to remove the raid configuration and have it run without raid.

How would one go ahead and do this?
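As a rough sketch of one common approach (assuming v0.90 metadata, which lives at the end of each partition, so the filesystem survives; device names /dev/md0, /dev/sda1, /dev/sdb1 are assumptions — back up first):

```shell
# Stop the array, then wipe the md superblocks so the kernel
# no longer auto-assembles the members at boot.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1

# Change the partition type from fd (Linux raid autodetect) to 83,
# then point /etc/fstab and grub.conf at /dev/sda1 instead of /dev/md0,
# and remove the array from /etc/mdadm.conf if it is listed there.
sfdisk --change-id /dev/sda 1 83
```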

View 1 Replies View Related

Server :: How Long Does Hardware Raid Card (raid 1) Take To Mirror 1 TB Drive (500gb Used)

Mar 22, 2011

How long does a hardware RAID card (RAID 1, two drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
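As a back-of-the-envelope estimate: a block-level RAID 1 rebuild copies the whole disk, not just the used space, so the time is roughly disk size divided by sustained write speed. Assuming ~100 MB/s (a typical SATA disk of that era; your controller and drives may differ):

```shell
size_gb=1000      # the whole 1 TB disk is copied, not just the 500 GB used
speed_mb_s=100    # assumed sustained sequential write speed
echo "$(( size_gb * 1024 / speed_mb_s / 60 )) minutes"
```

That works out to around 170 minutes, i.e. roughly 3 hours, so "4 hours" is the closest of the guesses under these assumptions.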

View 1 Replies View Related

Red Hat :: Loading RHEL 6 On Machine With Two RAID Arrays

Apr 15, 2011

Have a customer who is due for a new system. As they just renewed their RHEL entitlement, they plan on ordering a Dell server without an OS preload. Two questions:

- Will RH let them download RHEL6 just by maintaining the entitlement when their current version is RHEL 3?

- The server will have two RAID arrays - one intended for /home, one for "everything else". As I've never done a clean load with two arrays, how do I select which file systems go on which array?

View 3 Replies View Related

Red Hat :: LVM Cleanup Process - Two RAID Arrays On Storage

Jan 3, 2010

I am in a situation where I am stuck with an LVM cleanup process. Although I know a lot about AIX LVM, this is the first time I am working with Linux LVM2. The problem is that I created two RAID arrays on storage, which appeared as mpath0 and mpath1 devices (multipath) on RHEL. I created logical volumes and volume groups, and everything was fine until I decided to clean the storage arrays and ran the following script:

cat /scripts/numbers | while read numbers
do
  lvremove -f /dev/vg$numbers/lv_vg$numbers
  vgremove -f vg$numbers
  pvremove -f /dev/mapper/mpath${numbers}p1
done

Please note that numbers was a file in the same directory, containing the numbers 1 and 2 on separate lines. The script worked well and I was able to delete the definitions properly (however, I now think I missed a parted command to remove the partition definition from the mpath device). When I created three new arrays, I got devices mpath2 through mpath5 on Linux, and then I created vg0 to vg2. By mistake, I ran the above script again for cleanup purposes, and now I get the following error message:

Can't remove physical volume /dev/mapper/mpath2p1 of volume group vg0 without -ff

After searching my memory, I now realize that I have messed up (particularly because the mpath devices did not map in sequence to the vg devices; the mapping was mpath2 to vg0, and onwards). Now how can I clean up the LVM definitions? Should I go for the pvremove -ff flag or investigate further? I am not concerned about data; I just want to clean up these pv/vg/lv/mpath definitions so that LVM can be cleaned up properly and I can start over with new RAID arrays from storage.
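For what it's worth, when a PV's volume group metadata is gone or mismatched, the usual escalation is pvremove -ff, then clearing the stale partition mapping. A sketch using the device names from the post, assuming no data needs to survive:

```shell
# Wipe the physical volume label even though its VG metadata is inconsistent.
pvremove -ff -y /dev/mapper/mpath2p1

# Drop the stale device-mapper partition entry, then the partition itself.
kpartx -d /dev/mapper/mpath2
parted /dev/mapper/mpath2 rm 1

# Re-scan so LVM forgets the old metadata.
pvscan
```

Repeat for each affected mpath device, then the new arrays start from a clean slate.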

View 1 Replies View Related

Debian :: Reverting RAID 1 - Mount Partition As Standalone Encrypted Disk

Feb 11, 2011

I have 2 identical disks originally configured as a pair for a server. Each of the disks has 2 partitions, dev/sdb1 and dev/sdb2. The sdb1 partitions I had configured as a RAID 1 mirror. The sdb2 partitions were non-RAID and used as extra miscellaneous space. Further, the RAID setup is also encrypted using dm-crypt LUKS. Now I want to redeploy each of the disks for new purposes. One of the disks I want to deploy exactly as before (keeping the partitions and content), but without it being part of a RAID array.

I've successfully deployed this disk into a new system, and I am mounting the dev/sdb1 partition as dev/md0 because the disk is set to autodetect RAID. Actually, I am using cryptsetup and mounting with mapper. Can I get rid of the autodetect setting on this partition without losing the data or breaking the encryption? I just want to mount the partition as a standalone encrypted disk. Is it as simple as doing cryptsetup luksOpen /dev/sdb1 and then mounting it with mapper? Or do I need to change the partition in some way? Or do I simply continue to operate it as a 'broken' RAID array?
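A sketch of the standalone route, on the assumption that the md superblock is v0.90 (stored at the end of the partition, outside the LUKS payload, so data and encryption are untouched) and that the device really is /dev/sdb: the autodetect behaviour comes from the partition type, so flipping it from fd to 83 stops the kernel assembling md0.

```shell
# Stop autodetect: change partition 1's type from fd (raid autodetect) to 83.
sfdisk --change-id /dev/sdb 1 83

# If the kernel already assembled a half-array, stop it first.
mdadm --stop /dev/md0

# Open the LUKS container and mount it as an ordinary encrypted disk.
cryptsetup luksOpen /dev/sdb1 mydata
mount /dev/mapper/mydata /mnt
```

After that the partition behaves as a plain encrypted disk; no "broken array" state remains.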

View 2 Replies View Related

Ubuntu :: RAID 5 Setup - Two Separate Arrays Defined

Jan 9, 2011

I'm trying to set up a RAID 5 array of 3x 2 TB drives and noticed that, besides having a faulty drive listed, I keep getting what looks like two separate arrays defined. I've set up the array using the following:
sudo mdadm --create /dev/md01 --verbose --chunk=64 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sde

So I've defined it as md01, or so I think. However, looking in the Disk Utility the array is listed as md1 (degraded) instead. Sure enough, I get:

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sde[3](F) sdc[1] sdb[0]
3907028992 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

So I tried getting info from mdadm on both md01 and md1:

user@al9000:~$ sudo mdadm --detail /dev/md1
Version : 00.90
Creation Time : Sun Jan 9 10:51:21 2011
Raid Level : raid5 ......

Is this normal? I've tried using mdadm to --stop and then --remove both arrays and start from scratch, but I end up in the same place. I'm just getting my feet wet with this, so perhaps I'm missing some fundamentals here. I think the drive fault is a separate issue - strange, since Disk Utility says the drive is healthy, and I'm running the self-test now. Perhaps a bad cable is my next check...

View 3 Replies View Related

General :: Raid Devices Not Creating Arrays / Solution For This?

Mar 30, 2010

I have created a software RAID 5 configuration on the second hard drive and it's working fine. I have edited the fstab file so it mounts automatically at boot, but when I reboot the computer the RAID doesn't come up; I have to re-create the array by typing the "mdadm --create" command again and mount it again manually. Is there any way to set this up once, without retyping the commands after every reboot? I am using Red Hat 5.
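The missing piece here is usually /etc/mdadm.conf: --create is only for building the array the first time, and re-running it against disks that already hold data is risky. On each boot the array only needs to be assembled, which happens automatically once it is recorded in the config file. A sketch:

```shell
# One-time: record the existing array so it can be assembled at boot.
mdadm --detail --scan >> /etc/mdadm.conf

# From then on (or after any reboot where it didn't come up):
mdadm --assemble --scan

# With the array assembling on its own, the fstab entry for /dev/md0
# mounts normally.
```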

View 1 Replies View Related

Slackware :: How To Create Raid Arrays With Mkfs.btrfs

Jun 21, 2011

What I have: 2x 150 GB drives (sda) on a RAID card (RAID 1) for the OS (Slackware 13.37); 2x 2 TB drives (sdb) on that same RAID card (also RAID 1); 2x 1.5 TB drives (sdc, sdd) attached directly to the motherboard; and 2x 750 GB drives (sde, sdf) also attached to the motherboard. If I went about it the normal way, I'd create soft RAID 1 out of the 1.5 TB and the 750 GB drives and LVM all the data arrays (2 TB + 1.5 TB + 750 GB) together to get a unified disk. If I use btrfs, will I be able to do the same? I mean, I have read how to create RAID arrays with mkfs.btrfs and that some LVM capability is incorporated in the filesystem, but will it understand what I want it to do if I just say

mkfs.btrfs /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

probably not, eh?
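Right - mkfs.btrfs does take several devices, but with no profile flags it stripes data across them rather than mirroring, so redundancy has to be requested explicitly. A sketch, reusing the device names from the post:

```shell
# Mirror both data (-d) and metadata (-m) across the member devices.
mkfs.btrfs -m raid1 -d raid1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Mount any one member; btrfs discovers the rest of the set.
mount /dev/sdc1 /mnt/data
btrfs filesystem show
```

Unlike a classic RAID 1 pair, btrfs raid1 just keeps two copies of each block on any two devices, so the unequal drive sizes are handled without pre-pairing them.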

View 3 Replies View Related

CentOS 5 Hardware :: Connect A RAID Box To The Server Via LSI 8880EM2 RAID Controller

Aug 3, 2010

I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu Eternus DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.

The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).

I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. The result was success and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising - one logical RAID 0 on one physical HDD) and the configuration is erased.

Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?

View 3 Replies View Related

Hardware :: Installing New RAID Card Causes Existing Arrays To Have Bad Magic Number

May 13, 2010

I have two SAS RAID controller cards in a Dell server in slots 2 and 3, both with an array hanging off them. I went to install a third card into slot 1, but when it boots it says two of my sd devices have a bad magic number in the superblock and it wants me to create an alternative one, which I don't want to do. If I remove the new card, the server boots perfectly like it did before I added it. Is the new card trying to control drives that aren't hooked up to it because it's in slot 1, thereby confusing RHEL?

View 5 Replies View Related

General :: Scsi RAID Jbod And Arrays - Disk Utilization And The Corresponding Low Data Transfer

Jul 6, 2010

So I have a system that is about 6 years old running Redhat 7.2 that is supporting a very old app that cannot be replaced at the moment. The jbod has 7 Raid1 arrays in it, 6 of which are for database storage and another for the OS storage. We've recently run into some bad slowdowns and drive failures causing nearly a week in downtime. Apparently none of the people involved, including the so-called hardware experts could really shed any light on the matter. Out of curiosity I ran iostat one day for a while and saw numbers similar to below:


Some of these kinda weird me out, especially the high disk utilization alongside the correspondingly low data transfer. I'm not a disk I/O expert, so if there are any gurus out there willing to help explain what it is I'm seeing here, I'd appreciate it. As a side note, the system is back up and running; it just runs sluggishly, and neither the database folks nor the hardware guys can make heads or tails of it. I've sent them the same graphs from iostat but so far no response.
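For readers following along: high utilization with low throughput is the classic signature of a disk saturated by seeks (random I/O) rather than by bandwidth. The extended iostat view makes that visible; a sketch of what to run and which columns matter:

```shell
# -x: extended per-device stats, -k: throughput in KB/s, sample every 5 s.
iostat -xk 5

# Reading the output: %util near 100 combined with small rKB/s + wKB/s
# and a large await (ms per request) means the disks are seek-bound,
# not bandwidth-bound -- typical of a database doing random reads.
```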

View 1 Replies View Related

CentOS 5 :: Install And Booting From Raid Drive?

Feb 28, 2009

I am attempting to set up an IBM IntelliStation Z Pro to optionally boot from many OS versions. One of these is Windows XP. Windows XP is currently installed on a RAID 1 device consisting of a pair of 1 TB WD SATA drives. The HBA I'm using is a SYBA SATA II card based on Silicon Image's SIL3124-2 chip. The card has a BIOS, and I've set up two 60 GB mirrors for Windows. I boot XP fine from this setup and things run well.

I've downloaded the CentOS 5.2 images and am able to start the installation process. I'm baffled about where to install the OS software on the RAID device. The installation process shows the two 1 TB drives as separate drives, and there is no acknowledgment of the 60 GB partition I've already created on them for Windows XP. I expected to see only the one logical RAID device, with 60 GB of it in use for an NTFS partition. Reluctant to proceed further, I bailed on the install and am asking for advice on how to proceed.

The big question is

1) How can I install CentOS 5.2 on the RAID drive to coexist with a Windows XP installation? My desire is to boot either from this drive.

other questions related to this are:

2) Why does the CentOS partitioning software used for the install not display the logical RAID drive set up through BIOS (I'm assuming that this means it was set up apart from any Windows drivers etc.)? Only the two physical drives are displayed and there is no mention of any partitions in use.

View 3 Replies View Related

General :: How To Set Up LVM And Raid 1 With CentOS On 6 Hard Drive System?

Mar 12, 2010

I would like to set up a CentOS file server with LVM and RAID 1. Having 6 x 500 GB drives, 4 x 1 GB RAM, and a quad-core CPU, I am considering configuring three drives with LVM and then mirroring (RAID 1) to the remaining three drives.
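One conventional layout for six equal disks puts the mirroring underneath and LVM on top (three RAID 1 pairs pooled into one volume group), so every LVM extent stays redundant; mirroring "LVM to the remaining drives" the other way round is much harder to keep consistent. A sketch with hypothetical device and volume names:

```shell
# Three mirrored pairs out of the six 500 GB disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1

# Pool the mirrors into one volume group and carve out a data volume.
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate vg_data /dev/md0 /dev/md1 /dev/md2
lvcreate -L 1T -n lv_share vg_data
mkfs.ext3 /dev/vg_data/lv_share
```

This yields ~1.5 TB of mirrored, growable space from the six drives.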

View 7 Replies View Related

Software :: Secondary/Standalone Or Alternate Boot Drive?

Jan 26, 2010

I have Red Hat Linux 4 and 5. I need to know if there is a way to create an alternate/standalone boot drive to recover a system after a disaster.

View 1 Replies View Related

CentOS 5 :: Properly Unmounting RAID Drive In Recovery Mode?

Mar 9, 2010

I had a corrupted superblock in my RAID boot drive (/dev/md0), which I fixed with fsck in Recovery Mode.

However, after rebooting, the same boot-up problem (hanging for hours) persists. When I re-enter Recovery Mode to examine the boot drive, I find its superblock is corrupted again (which I fix again using fsck).

Is there a proper reboot procedure which is gentle on the boot drive, such that it doesn't corrupt it? Or, is something else amiss?

View 1 Replies View Related

CentOS 5 :: Dual Boot With Windows RAID And Drive Combination

Jan 8, 2011

I have a pre-existing setup with Windows XP Professional and CentOS5.5 on a dual boot setup with the Linux drive setup as the primary drive hosting the grub menu.

I am replacing these machines with new updated ones, and they have Windows set up on a RAID 0. I think it would be easiest to follow my previous setup and move the RAID to the secondary SATA ports and put the Linux drive on the primary SATA port, or should I just change the boot order in the BIOS to have the secondary Linux drive boot first?

Can I move a RAID setup to secondary controller ports without breaking the RAID?

View 1 Replies View Related

Server :: Convert Single Drive Ubuntu Server To RAID 0

May 28, 2010

I am running a single-drive Ubuntu Server 9.10 machine with a lot of software. Now I want to add one more disk (same size and type) and convert this to RAID 0 without needing to reinstall. Is it possible, and if so, how? I didn't find anything for RAID 0. It sounds simple, but probably is not.

View 4 Replies View Related

Fedora :: Backing Up A Failing Laptop To A Standalone Drive?

Feb 11, 2010

I am running the Fedora 12 Live CD from the CD-ROM drive of my dying laptop. I have a major Windows registry error on that system and am working to recover my files. I have successfully moved a couple of folders from the laptop to my Seagate FreeAgent drive as a test. What I would like to know is: is there a way to copy my files and folders without literally dragging and dropping each one? We're talking 140 GB of folders... sigh.

View 1 Replies View Related

Server :: Setup RAID 1 On CentOS 5 Server For A Zimbra Email Server

Feb 7, 2011

I'm trying to set up RAID 1 on a CentOS 5 server for a Zimbra email server, and I get a partition schema error. Can I do this? The server is an HP ProLiant ML150 G3 with two 80 GB HDDs.

View 1 Replies View Related

Server :: Dual Drive Failure In RAID 5

May 22, 2009

I *had* a server with 6 SATA2 drives running CentOS 5.3 (I've upgraded over time from 5.1). I had set up (software) RAID 1 on /boot for sda1 and sdb1, with sdc1, sdd1, sde1, and sdf1 as hot backups. I created LVM (over RAID 5) for /, /var, and /home. I had a drive fail last year (sda). After a fashion, I was able to get it working again with sda removed. Since I had two hot spares on my RAID5/LVM setup, I never replaced sda. Of course, on reboot, what was sdb became sda, sdc became sdb, etc. So, recently, the new sdc died. The hot spare took over, and I was humming along. A week later (before I had a chance to replace the spares), another died (sdb). Now I have 3 good drives and my array is degraded, but it had been running (until I just shut it down to try this).

I now only have one replacement drive (it will take a week or two to get the others). I went into linux rescue from the CentOS 5.2 DVD and changed sda1 to a Linux (as opposed to Linux RAID) partition. I need to change my fstab to look for /dev/sda1 as /boot, but I can't even mount sda1 as /boot. What do I need to do next? If I try to reboot without the disk, I get: insmod: error inserting '/lib/raid456.ko': -1 File exists. Also, my md1 and md2 fail because there are not enough disks (it says 2/4 failed). I *believe* this is because sda, sdb, sdc, sdd, and sde WERE the drives in the RAID before, and I removed sdb and sdc, but now I do not have sde (because I only have 4 drives) and sdd is the new drive. Do I need to label these drives and try again? Suggestions? (I suspect I should have done this BEFORE the failure.) Do I need to rebuild the RAIDs somehow? What about LVM?

View 6 Replies View Related

CentOS 5 :: Enable Repo Media For Standalone System

Jun 3, 2010

I've installed a standalone system (no internet connection), and now I would like to add more software from the 7-CD set I used for the clean installation. I've enabled the 'media-repo' and disabled all others, but when I go to the main menu -> remove/install software, the system shows the message 'Can't find repomd.xml file...' and stops.

What can I do to enable this method?
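For reference, a CD/DVD repository on CentOS 5 is normally wired up by mounting the media and pointing a .repo file at it; the repomd.xml error usually means the baseurl does not point at the directory that actually contains repodata/. A sketch, with an assumed mount point:

```shell
mount /dev/cdrom /media/cdrom

# The baseurl must be the directory that holds repodata/ on the disc.
cat > /etc/yum.repos.d/media.repo <<'EOF'
[c5-media]
name=CentOS-5 - Media
baseurl=file:///media/cdrom/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
enabled=1
EOF

# Install using only the media repo (package name is a placeholder).
yum --disablerepo=\* --enablerepo=c5-media install somepackage
```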

View 1 Replies View Related

CentOS 5 :: Permanently Remove Drive From Md Array (RAID1)

May 14, 2011

I installed a distro based on CentOS 5.5 (the FreePBX distro, FYI). It used an automated kickstart script to create md RAID 1 arrays spanning all the hard drives connected to the machine. Well, I installed from a thumb drive, which the script interpreted as a hard drive and thus included in the arrays. So I ended up with three md arrays (boot, swap, data) that included the thumb drive. Even better, it used the thumb drive for the grub boot, so I couldn't start up without it. I was able to mark the USB drive as failed and remove it from each array, and even change grub around to boot without the USB drive, but now each of the arrays is marked as degraded:
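With the USB member gone for good, the arrays can usually be told to stop expecting a third device by shrinking the member count, which clears the degraded state. A sketch, assuming three arrays (md0/md1/md2) each now down to two real disks:

```shell
# Reduce each array's expected member count from 3 to the 2 real disks.
mdadm --grow /dev/md0 --raid-devices=2
mdadm --grow /dev/md1 --raid-devices=2
mdadm --grow /dev/md2 --raid-devices=2

# Verify: expect "State : clean" and [2/2] [UU] afterwards.
mdadm --detail /dev/md0
cat /proc/mdstat
```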


View 1 Replies View Related

Server :: MDADM Raid 5 Array - OS Drive Failure?

Jun 7, 2011

I have 4 WD10EARS drives running in a RAID 5 array using mdadm. Yesterday my OS drive failed. I have replaced it and installed a fresh copy of Ubuntu 11.04, then installed mdadm and rebooted the machine, hoping that it would automatically rebuild the array. It hasn't: when I look at the array using Disk Utility, it says that the array is not running. If I try to start the array it says: Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error

mdadm: Not enough devices to start the array. I have tried mdadm --assemble --scan and it gives this output: mdadm: /dev/md0 assembled from 2 drives - not enough to start the array. I know that all 4 drives are present, as they are all showing, but it is only using 2 of them. I also ran mdadm --detail /dev/md0, which gave:

root@warren-P5K-E:~# mdadm --detail /dev/md0
Version : 0.90
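When an array refuses to start with only 2 of 4 members found, the usual next steps are to check each member's superblock and then attempt a forced assemble, which re-admits members whose event counters lag slightly behind. A hedged sketch (the device names, and whether the members are partitions or whole disks, are assumptions):

```shell
# Inspect every candidate member: do all four carry md superblocks,
# and do their event counts roughly agree?
mdadm --examine /dev/sd[bcde]1

# Stop the half-assembled array, then force assembly with all members.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcde]1
```

If --examine shows two members with no superblock at all, the problem is different (wrong devices or overwritten metadata) and forcing will not help.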


View 11 Replies View Related

Server :: Software Raid Mdadm Can't Bring Up Drive

Jul 27, 2011

I have a raid 5 array that appears to have died. I was just routinely looking at /var/log/messages and noticed that a drive in the array was complaining (via SMARTD).

This is the /home directory, and so is backed up, so it's not critical, but I'd like to get some things that changed after the last backup (the week before I noticed the failure)

Let me start by outlining what I know :

It's a 2TB array spread over three disks (mdadm software RAID5), here are the drives:

MDADM gives the drives :

Now, the array *WAS* up OK, but I unmounted it (/dev/md0 was mounted on /home). Yes, I know - I didn't want any changes being made to the array by anything; at least that was my thinking at the time. In hindsight, I would have killed any processes, locked out the server, backed up again, and *then* unmounted it.

But we are where we are, I'm sure there'll be time for recriminations later.

When I try to remount it, I get :

OK - it looks like it's lost the filesystem type. It's normally worked; maybe we'll give it a little hint that it's ext3 with a journal.

When I tell it it's an ext3, I get :

Now, before I go charging off specifying backup superblocks further along the disk: I can't remember where they're stored.

Neither can I recall what block size I originally created the array with (I have a feeling I specified 4K, but I could be wrong).

debugfs is only telling me :

I should also point out that this server is hosted, 150 miles away from me at the moment, so I can't just whip the drives out and dd a copy.
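On the backup-superblock question: ext3 keeps copies at fixed block numbers, and mke2fs -n prints where they would be for a given device without writing anything, which also reveals the block size. A sketch (device name taken from the post; -n is the critical read-only flag):

```shell
# -n: dry run -- prints the filesystem layout, including backup
# superblock locations and block size, without touching the device.
mke2fs -n /dev/md0

# With a 4K block size the first backup is typically at block 32768;
# -b selects the alternate superblock, -B the block size.
fsck.ext3 -b 32768 -B 4096 /dev/md0
```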

View 11 Replies View Related

Server :: How To Access Auto Raid Hard Drive Component?

Dec 22, 2010

I have a hard drive on which RAID 5 is configured and no file system is set up, so I want to access the data on this auto-RAID-component hard disk. Could anyone tell me how to access an auto RAID component hard drive? When I connect it to my laptop it doesn't open; when I check in the disk analyzer it shows as an auto RAID component hard drive. Please help me access the data inside the RAID drive.

View 2 Replies View Related

CentOS 5 :: How To Partition / Format And Mount 10TB Arrays

Nov 17, 2009

I currently have an array with 8 TB that will eventually grow to 10 TB+ with 2 TB disks. How would I partition, format, and mount an array of this size, and also leave room to grow when I add new disks? I heard ext3 only supports 8 TB, so would LVM be the best choice here?
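A common recipe for a growable multi-terabyte array is a GPT label (MS-DOS labels stop at 2 TB), LVM in the middle so new disks can be absorbed later, and a filesystem that resizes online; ext3's ~8 TB ceiling on CentOS 5 is why XFS is usually picked here. A sketch with assumed device and volume names:

```shell
# GPT label for the big array device.
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%

# LVM layer so the volume can grow when disks are added.
pvcreate /dev/sdb1
vgcreate vg_big /dev/sdb1
lvcreate -l 100%FREE -n lv_big vg_big
mkfs.xfs /dev/vg_big/lv_big
mount /dev/vg_big/lv_big /srv/storage

# Later growth: pvcreate the new disk, vgextend vg_big, lvextend the
# volume, then xfs_growfs the mounted filesystem -- no reformat needed.
```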

View 2 Replies View Related

Copyrights 2005-15 www.BigResource.com, All rights reserved