Server :: Mdadm Cannot Grow Raid1 Over Lvm?

Mar 31, 2011

I have 2 servers (xen1 and xen2 are their hostnames) with the configuration below. Each server has 4 SATA disks, 1 TB each,

16 GB DDR3 RAM,
and Debian Squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

Storage configuration: The first 256 MB and 32 GB of 2 of the 4 disks are used as raid1 devices for /boot and swap respectively. The remaining space, 970 GB on all 4 SATA disks, is used as raid10, with LVM2 installed on top of that raid10. The volume group is named xenlvm (the servers are intended as Xen 4.0.1 hosts, but this story is not about Xen troubles). /, /var, and /home are located on logical volumes of small size (I just noticed I mixed up some LV names and partitions, but I don't think that's the problem):

[Code]...
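
For reference, the usual growth path on a raid-plus-LVM stack like this runs bottom-up: grow the md device, then the physical volume, then the logical volume and its filesystem. A hedged sketch (the device and LV names are assumptions based on the description above):

Code:
# grow the RAID10 device to use all available component space
mdadm --grow /dev/md2 --size=max
# let LVM see the larger physical volume
pvresize /dev/md2
# extend one of the small logical volumes and its filesystem
lvextend -L +10G /dev/xenlvm/var
resize2fs /dev/xenlvm/var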


Server :: Mdadm Software Raid1 Failed Disk Detection Too Long

Jul 22, 2011

I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as RAID1 with a hot spare. If I pull one of the active disks, all file I/O stops for about 2.5 minutes, after which it starts again and the raid array is rebuilt using the spare disk. Is there any way I can reduce these 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:

12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)

[code]....
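
Two knobs that sometimes shorten this kind of hang (a sketch, not verified on SLES10; device names and values are illustrative): the kernel's per-device command timeout, and the drives' own error-recovery timeout, if they support SCT ERC:

Code:
# lower the SCSI command timeout (seconds) for each array member
for d in sda sdb sdc; do
    echo 5 > /sys/block/$d/device/timeout
done
# cap the drive's internal error recovery at 7 seconds, if supported
smartctl -l scterc,70,70 /dev/sda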


Debian :: Mdadm Create A Backup File After Grow Command

Feb 27, 2016

I launched an mdadm grow to expand my filesystem but forgot --backup-file. My server is running and I have one UPS.

Code:
pk25.com:~# mdadm --manage /dev/md0 --add /dev/sde1
mdadm: added /dev/sde1
pk25.com:~# mdadm --grow --raid-devices=5 /dev/md0
mdadm: Need to backup 3072K of critical section..
pk25.com:~# mdadm --grow --raid-devices=5 --backup-file=/root/md0-grow.bak /dev/md0
mdadm: /dev/md0 is performing resync/recovery and cannot be reshaped

[code]....
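
The last error suggests the reshape started by the first --grow is already running, so the second invocation is rejected. A hedged way to watch it until it completes (standard commands):

Code:
# watch reshape/resync progress
cat /proc/mdstat
mdadm --detail /dev/md0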


CentOS 5 Hardware :: Getting Errors When Running Mdadm In Grow Mode Failed

Jun 24, 2009

Is growing raid 6 possible in CentOS 5.3? I'm getting errors when I run mdadm in --grow mode: 'mdadm --grow /dev/md0 -n 5' fails with 'mdadm: Cannot set device size/shape for /dev/md0: Invalid argument'. Do I have to build a custom kernel for CentOS?
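
A hedged sanity check before suspecting the command itself: RAID6 reshape support arrived in the kernel later than RAID5's, and the stock CentOS 5 kernel (2.6.18) may simply predate it, which would explain the 'Invalid argument':

Code:
# compare the running kernel and mdadm versions against the reshape requirements
uname -r
mdadm --version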


Ubuntu Servers :: Boot From Raid1 (mdadm) + Lvm

Aug 11, 2010

Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.

[Code]....

Then I tested my raid by hot-pulling the sda cable (ouch). It worked fine: the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, with the first drive missing). Now I am trying to recover this pre-crash state. Adding the first disk (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk as sda...

At first, booting got stuck at an initramfs prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen that would let me wait for a boot for weeks... So, my system does not boot from my first disk, whether I plug in the second or not; my second disk still boots. My last attempt to get booting working again was:

zero sda's first and last gigabyte to kill any IDs
duplicate sdb's first cylinder to sda to make it bootable
reinitialize sdb's partition table using command 'o' in fdisk to get a new disk ID
recreate the sda1 partition
add sda1 to md0
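
A step that is easy to miss after re-adding a mirror member is reinstalling the boot loader on the new disk; syncing md0 copies the partition contents but not the MBR. A hedged sketch (assuming GRUB 2 as shipped with 10.04, device names as above):

Code:
# put GRUB on the MBR of both members so either disk can boot alone
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub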


General :: RAID1 Resync Slow Using Mdadm?

Mar 11, 2010

I am running kernel 2.6.18-128.el5 on a 64-bit quad-core machine with 8GB RAM. Using mdadm I set up a RAID1 array between two Western Digital 1.5TB drives. The problem is that the resync is running VERY slowly. Here is the current status:

[root@royalflush shared]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdc5[1] hda5[0]
1304493952 blocks [2/2] [UU]
[=>...................] resync = 6.2% (81592192/1304493952) finish=4280156.0min speed=4K/sec
unused devices: <none>
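
One hedged thing to try is raising md's resync speed limits, which throttle rebuilds to keep the system responsive (values are illustrative, in KB/s per device):

Code:
# raise the floor and ceiling on resync throughput
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max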


General :: Recover A RAID1 Array Using Mdadm

May 12, 2010

I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died, and I could do with either rebuilding the array or getting the data off another way.

I have a spare machine I could use for the recovery process. It has a spare drive, but it's only 120GB; I also have a bigger 320GB disk, but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive, or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB, or will it have to be 500GB? The next question is what the best version of Linux is to use; I have Knoppix 6.2 and Ubuntu (not sure which version) already. I noticed that mdadm isn't installed by default on Ubuntu.
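
Since RAID1 members are byte-for-byte mirrors, the surviving disk can usually be read on its own from any live CD once mdadm is installed. A hedged sketch (assuming the good disk shows up as /dev/sdb with the data on its first partition):

Code:
# assemble the degraded mirror from the single surviving member
sudo mdadm --assemble --run /dev/md0 /dev/sdb1
# mount it read-only and copy the data somewhere safe
sudo mount -o ro /dev/md0 /mnt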


Software :: Moved To Mdadm Raid1 And Now Having Grub?

Nov 27, 2010

Posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to an mdadm raid1 array. I partially followed this guide: [URL].. What I did was boot up a live CD and make three partitions on both of my empty disks: one for /, one for swap, and one for /vz (it's an OpenVZ server). I made those partitions into separate raid1 arrays and then rsync-ed everything from the old disk to the new partitions.

After I had moved everything, I chrooted into the new raid array and edited both grub config files and fstab, according to the guide.

[Code]...

I have managed to run the system on the raid1 disks when using a Super Grub2 Disk off a CD, but that has its own grub and can boot any distro, so I can see that the system works fine, except for grub. I have tried installing grub both from a live CD (Ubuntu 64-bit) and when booted into the raid1 array, but it gives the same results as stated above.
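
For what it's worth, a hedged sketch of reinstalling GRUB against the new array from a live CD (mount points and device names are assumptions; md0 stands in for the root array):

Code:
# chroot into the assembled array, then install GRUB on both disks
mount /dev/md0 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt
grub-install /dev/sda
grub-install /dev/sdb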


General :: Creating A RAID1 Partition With Mdadm On Ubuntu?

Jan 28, 2010

I'm trying to set up a RAID1 partition on my Ubuntu 9.10 workstation. On this dual-boot system, Ubuntu runs from a separate drive (/dev/sdc, an SSD that is quite small, which is why I need more disk space). Besides that, there are two traditional 500 GB hard drives, which have Windows 7 installed (I want to keep the Windows installation intact) and about half of their space unallocated. This space is where I want to set up a single, large RAID1 partition for Linux.

(This, to my understanding, would be software RAID, whereas the Windows partitions are on hardware RAID; I hope this isn't a problem... Edit: see Peter's comment. I guess this shouldn't be a problem, since I see both drives separately on Linux.) On both disks, /dev/sda and /dev/sdb, I created, using fdisk, identical new partitions of type "Linux raid autodetect" to fill up the unallocated space.

Device Boot Start End Blocks Id System
/dev/sda1 1 10 80293+ de Dell Utility
/dev/sda2 * 11 106 768000 7 HPFS/NTFS

[code]....

But "Device or resource busy" is what I get when trying to create the RAID array. Quite strange.

Update: could the device mapper have something to do with this? How do /dev/mapper and dmraid relate to all this mdadm stuff anyway? Both provide software RAID, but... differently? Sorry for my ignorance here. Under /dev/mapper/ there are some device files that, I think, somehow match the 3 Windows RAID partitions (sd{a,b}1 through sd{a,b}3). I don't know why there are four of these arrays, though.

$ ls /dev/mapper/
control isw_dgjjcdcegc_ARRAY1 isw_dgjjcdcegc_ARRAY3
isw_dgjjcdcegc_ARRAY isw_dgjjcdcegc_ARRAY2
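
"Device or resource busy" here often means dmraid has claimed the whole disks for those isw_* (Intel fakeRAID) mappings, so mdadm can't open the new partitions. A hedged way to check, and to release them (only if the Windows arrays aren't needed while Linux is running):

Code:
# see which device-mapper targets are holding sda/sdb
sudo dmsetup ls
# deactivate all dmraid-managed sets (reversible with 'dmraid -ay')
sudo dmraid -an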


Ubuntu Servers :: Interpreting Mdadm RAID1 Status?

Feb 7, 2011

I have a RAID1 array where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection with the mdadm -E /dev/sdb1 command says that the sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive has failed? Am I misreading the command outputs?

Code:
sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

[Code]...
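
A hedged way to cross-check the two views (the array name /dev/md0 is an assumption): --detail shows the assembled array's current opinion of each slot, while -E shows what each member's own superblock records.

Code:
# array-level view: which slots are active, faulty, or removed
sudo mdadm --detail /dev/md0
# member-level view from each superblock
sudo mdadm -E /dev/sda1
sudo mdadm -E /dev/sdb1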


Ubuntu Servers :: Mdadm Raid1 Doesn't Finish Syncing?

Nov 14, 2010

One of the hard drives in my server failed the other day; backups saved the day and downtime was only a few hours, but when setting up the new drive I went ahead and migrated to software RAID, in the hope it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.

sda was the original drive, with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I added the sda partitions to the array, and the sync went off without a hitch on md1 and md2. md4 progresses well, but after a few hours /proc/mdstat just shows this:

Code:
root@d668:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sda2[0]
9767424 blocks [2/2] [UU]

[Code]...
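
A hedged way to see whether md4 is genuinely stalled or just slow (standard commands; md4 as named in the description above):

Code:
# refresh the sync status every few seconds
watch -n 5 cat /proc/mdstat
# check the array state and rebuild progress directly
sudo mdadm --detail /dev/md4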


OpenSUSE Install :: Mdadm - Change Spare To Active In RAID1 Array?

Aug 7, 2011

I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.

OpenSUSE 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x 500GB IDE disks: /dev/md0 as a 250MB partition across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB partition across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that left drives failing to show up at boot. Somehow, /dev/sdd2 got configured into array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.

The obvious step of failing the partition, removing it, then adding (or re-adding) it brings it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. I thought this might be the trick: after the zeroing, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.

So I ran 'mdadm --add /dev/md1 /dev/sdd2', and it still comes back as a spare. Here is mdadm -D /dev/md1:

/dev/md1:
Version : 1.0
Creation Time : Sat Jul 9 10:26:01 2011
Raid Level : raid1
Array Size : 488119160 (465.51 GiB 499.83 GB)
code....

I can't stop this array; the OS is running from there. I can't easily boot from CD to repair, as all IDE ports have disks attached.

Does anyone have an incantation to promote a spare to active?
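
One hedged thing to verify: a new member can only join as a spare, and a spare only activates when the array believes it is degraded and starts a recovery. If "Raid Devices" in the output above is lower than expected, growing the device count should promote the spare (a sketch, names from the post):

Code:
# if the array thinks one device is its full complement, grow it to two
mdadm --detail /dev/md1 | grep -E 'Raid Devices|Active Devices|State'
mdadm --grow /dev/md1 --raid-devices=2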


CentOS 5 :: Ext3fs On Mdadm/raid1 Freeze/lock-ups On Heavy Write?

Feb 27, 2011

I've been facing a problem with server freezes on heavy writes.

System

CentOS 5.5 x86_64 with latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2GB DDR3
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0

[Code].....
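
Not a root-cause fix, but a mitigation often tried for write-burst stalls is shrinking the kernel's dirty-page thresholds so writeback starts earlier and in smaller bursts (a hedged sketch; values are illustrative):

Code:
# flush dirty pages sooner instead of in one huge burst
echo 5 > /proc/sys/vm/dirty_ratio
echo 2 > /proc/sys/vm/dirty_background_ratio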


Hardware :: Raid1 Mdadm Repair Degraded Array With Used Good Hard Drive?

Jun 27, 2009

I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing raid1 array. mdadm --detail /dev/md0 shows:

0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1

I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue mdadm --manage /dev/md0 --fail /dev/sda1, but mdadm's response is: mdadm: hot remove failed for /dev/sda1: no such device or address. I thought I must mark the failed drive as "failed" to prevent raid1 from mirroring in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the raid array from automatically trying to mirror in the wrong direction.
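
Since the slot already shows as "removed", there is nothing left to fail; the array can only rebuild from its one active member (sdb1) onto whatever is added, so the mirror cannot run in the wrong direction. A hedged sketch for bringing in the used drive (assuming it appears as /dev/sda with a suitable sda1 partition):

Code:
# wipe any stale md metadata from the used drive first
mdadm --zero-superblock /dev/sda1
# add it; the rebuild copies from the active member onto the new one
mdadm --manage /dev/md0 --add /dev/sda1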


Server :: Start Postfix Server Then Immediately Maillog Starts To Grow

Mar 18, 2010

I am running CentOS 5.4 and Postfix. When I start the Postfix server, the maillog immediately starts to grow. The first lines that I see in it are:

[Code]...

The server is already on several blacklists and I desperately need to do something.
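
A hedged first-response sketch while investigating (standard Postfix tools; the point is to stop the outflow, then see what is queued and where it came from):

Code:
# stop accepting/delivering mail while you investigate
postfix stop
# inspect what is sitting in the queue
mailq | tail -n 50
# count deferred messages
find /var/spool/postfix/deferred -type f | wc -l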


Server :: Dynamically Grow The Ext3 File System In CentOs?

Nov 3, 2010

I have configured a syslog server, with /var as a separate ext3 partition, to receive the logs and events from the clients and the firewall. The directory needs to grow dynamically as the logs accumulate. Is there a way I can make the filesystem grow dynamically as and when the directory is full?
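
An ext3 filesystem cannot grow past its partition by itself, but if /var sat on LVM the growth path could be scripted when space runs low. A hedged sketch (the volume names are assumptions):

Code:
# extend the logical volume behind /var and grow ext3 online
lvextend -L +5G /dev/vg0/var
resize2fs /dev/vg0/var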


Server :: Reduce RAID5 Partition Sizes / Reduce The Size Of Md1 And Grow Md0?

Feb 14, 2010

I have a rack of four 1TB drives all partitioned identically with three primary partitions. On each drive

- the first partition is only 64MB;
- the second is a large 900GB partition and
- the last holds all the remaining space

mdadm has been used to set up
/dev/md0 - RAID1, comprised of /dev/sda1 and /dev/sdb1
/dev/md1 - RAID5, comprised of /dev/sda2, /dev/sdb2, /dev/sdc2, /dev/sdd2
/dev/md2 - RAID5, comprised of /dev/sda3, /dev/sdb3, /dev/sdc3, /dev/sdd3

OK, so it was a silly mistake to make, but I now need to increase the size of /dev/md0. My thinking is to reduce the size of md1 so that I can grow md0.

On md1 I have two logical volumes. I've successfully reduced the size of the volume so that I can reduce the size of md1. Now I'm at the nervous stage; I can find little written on the topic of shrinking RAID5 arrays, and even if I do this I'm unsure whether I can move partitions around to regain the space I'm after.
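
For the record, a hedged outline of the order of operations for the shrink (sizes are illustrative; every step here can destroy data, so back up first): the LVM physical volume inside md1 must be shrunk before the array itself is.

Code:
# 1. shrink the PV so it fits within the smaller array (size illustrative)
pvresize --setphysicalvolumesize 800G /dev/md1
# 2. shrink the array; --size is the new per-device size in KiB (illustrative)
mdadm --grow /dev/md1 --size=292968750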


Server :: System Image Of Intel Server RAID1?

Aug 2, 2011

I have an Intel server which has its two SATA HDDs in an "Intel Embedded Server RAID Technology 5.4" RAID1 volume. How should I proceed with a system image in case both of those SATA HDDs fail at the same time? Should one take the first HDD of the RAID1 volume, connect it to another machine, and execute:

Code:

# ddrescue /dev/sda1 /media/External/image_of_first_hdd /media/External/log_of_first_hdd
* the HDD from the problematic RAID1 volume would be recognised as /dev/sda1 on the new machine
* /media/External/ is a mount point for a large external HDD on the new machine
* log_of_first_hdd would be the log file

..and then take the second HDD to another machine and execute:

Code:

# ddrescue /dev/sda1 /media/External/image_of_second_hdd /media/External/log_of_second_hdd

So, how does one make a system image using ddrescue when the disks are in an "Intel Embedded Server RAID Technology 5.4" RAID1?
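
One hedged nuance: imaging the whole member disk rather than a single partition also preserves the partition table and any RAID metadata, which makes restoring to a replacement disk simpler:

Code:
# image the entire disk, not just the first partition
ddrescue /dev/sda /media/External/image_of_first_disk /media/External/log_of_first_disk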


Server :: Best Way To RAID1 The Boot Drive?

May 17, 2010

I am teaching myself everything I need to develop a home-based web server (linux/apache/php/mysql/html/css/etc...). It's quite an undertaking, but not beyond my abilities. I thought this question could have gone in either the Linux - Software or Linux - Hardware forum, and certainly not in the n00b section, but I figured it's best put in the Linux - Server forum, since that's what this is related to.

I have been looking into software and hardware RAID solutions for Linux because I want to make sure that the boot drive of the web server I set up is mirrored with transparent disk fail/replace/recovery. I mean, setting up a boot drive for RAID1 sounded perfectly logical to me, and why wouldn't it to anybody else? So, since I knew RAID controllers were expensive, I looked into the native software RAID support in Linux. My findings revealed an issue with software-raiding a boot drive in not only Linux but Windows as well. Apparently, if the primary drive fails (not the mirror), you have no option but to power down the system to properly replace the failed disk, reboot, play some config crap, resync the drive, do some more config crap, reboot again, and -hopefully- it'll be ok. Well, that procedure is simply out of the question, since the idea behind RAID is to transparently proceed as if nothing happened.

I'd like to know if it's even possible to RAID1 the boot drive for transparent and automatic fail/hot-swap/recovery WITHOUT rebooting the system and with no intervention on my part other than replacing the drive, whether it be a software or hardware raid solution. Eventually, what I'd like to do for a drive configuration is have 3 RAID volumes on the server, configured like so:

RAID volume 1 = boot drive w/ webserver installed
RAID volume 2 = database files
RAID volume 3 = flatfile storage
Each raid volume will be a RAID1 of a 1TB drive (total = 6 x 1TB drives)

I've seen a lot of people having failure issues with software RAID on these forums. Is this more common than not? I'm certainly not opposed to buying a hardware RAID solution as long as it's reliable and provides transparent/automatic recovery. So what's the best way to RAID1 the boot drive for transparent/automatic failover?


Fedora :: MDADM On 12 64bit - Error "mdadm: Cannot Add Disks To A 'member' Array, Perform This Operation On The Parent Container"

Nov 22, 2009

Here's a brief description of my system:

120GB Sata HDD - Primary OS drive
3 x 1.0TB Sata HDD - Raid 5 array

This is on a C2D MSI P35 Platinum board. Anyway, I did a fresh install of F12 on the 120GB drive, which I had problems with: Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disk under the Live CD and proceeded to install F12.

Once F12 was installed, I used mdadm to re-activate my Raid 5 array, using 'sudo mdadm --assemble --uuid=' (with the uuid), and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. After erasing /dev/sdb I attempted 'sudo mdadm --add /dev/md0 /dev/sdb', and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.

[Code].....

I don't believe the hard drives are connected in the exact same order they were in before; I disconnected everything in the system and blew it out (it was pretty dusty).
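
That error message usually means the array metadata is container-based (Intel IMSM or DDF, consistent with the disk having shown up as a raid member earlier), and spares must be added to the container device rather than to the member array. A hedged sketch (the container name md127 is an assumption; check mdstat for the real one):

Code:
# find the container; it appears as its own md device in mdstat
cat /proc/mdstat
sudo mdadm --detail /dev/md0
# add the disk to the container, not to the member array
sudo mdadm --add /dev/md127 /dev/sdb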


Server :: Lvm Ontop Of Raid10 Or Combine Two Raid1 Via Lvm?

Sep 9, 2009

I'm planning to set up an Ubuntu file server using the 8.04 LTS server edition. The system will probably have 4 hard drives, which in the end will form a software RAID10 setup. I'd also like to use LVM at some point in order to be able to make snapshots. Reading through some mdadm and LVM docs/tutorials, I can think of two possible setups:

In both cases:

a small raid1 of 2 partitions that will form /boot
a small raid1 of 2 different partitions as swap space

1. the rest will form 2 large raid1 arrays, which will be combined into a single virtual drive via lvm

2. make a raid10 out of the rest with mdadm, then make an lvm volume group consisting of just that one virtual raid10 device

Are there pros/cons for either solution? Is lvm as powerful as mdadm at striping? Will the first solution produce less overhead? (A sketch of option 2 follows below.)
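
A hedged sketch of option 2 (the partition names are assumptions):

Code:
# stripe-and-mirror across the four large partitions
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]3
# layer LVM on top of the single md device
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 100G -n data vg0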


Server :: Raid1 On Debian Won't Boot After Growing?

Apr 21, 2010

I installed a raid1 on a Debian Lenny box with only 1 drive ("--raid-devices=1") because I didn't have the other drive yet. When I got the other drive, I used "mdadm --grow /dev/md0 --raid-devices=2" and "mdadm --manage /dev/md0 --add /dev/sdb1". The original drive is sda1. I watched /proc/mdstat until it was completely synced, but after a reboot the system will not reassemble the raid. It fails with "mdadm: no devices found for /dev/md0". This is where root is; therefore, I get nowhere. From a rescue CD I can disable the other drive, shrink back down to 1 device, and it boots fine.
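
A hedged guess at the missing piece: the initramfs and mdadm.conf may still describe the old single-device array, so the early boot environment refuses the grown one. A sketch of refreshing them from the rescue environment:

Code:
# record the array's current layout and rebuild the initramfs
# (remove any stale ARRAY line for md0 from mdadm.conf first)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u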


Server :: Weekly Once Resync Is Starting Itself In RAID1

Jul 25, 2011

How can I stop the resyncing permanently, and how can I check whether ordinary SATA HDDs can support RAID before/after buying them? Every Saturday or Sunday a resync starts by itself, even though there is no entry for resyncing in crontab. Yet if I run "cat /proc/mdstat" it shows the RAID1 is fine. See the output below:

#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
513984 blocks [2/2] [UU]

[code]....
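
The weekend event is almost certainly the distribution's scheduled consistency check, not a fault-triggered resync, which is why mdstat looks healthy. A hedged place to look (paths vary by distro):

Code:
# RHEL/CentOS style: weekly raid-check cron job
ls /etc/cron.weekly/ | grep -i raid
# Debian/Ubuntu style: checkarray scheduled from mdadm's cron file
cat /etc/cron.d/mdadm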


CentOS 5 :: Server Won't Boot - RAID1 Errors?

Aug 9, 2010

Not sure what is going on here. The server is RAID1 through hardware RAID. It was running an unusually high load, so I rebooted it. Now it won't boot up. I am getting these errors after the CentOS boot screen:

sda: Current [descriptor]: sense key: Medium Error
Add. Sense: Address mark not found for data field

end_request: I/O error, dev sda, sector 3040555357
device-mapper: raid1: A read failure occurred on a mirror device.
device-mapper: raid1: All sides of mirror have failed.

[code]....
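
Given the medium errors on sda, a hedged first step from a rescue CD is to confirm the physical failure before touching the mirror:

Code:
# read the drive's own error log and kick off a short self-test
smartctl -a /dev/sda
smartctl -t short /dev/sda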


Server :: Can't Assemble RAID5 With Mdadm

Jun 14, 2011

I can't seem to get my RAID 5 (consisting of 8 1TB hard drives) assembled, and I have no idea why and can't find any solutions online. I'll go ahead and show what my problem is.

Here are all my hard drives:

Code:
server:~$ sudo fdisk -l
Disk /dev/sda: 10.2 GB, 10242892800 bytes
255 heads, 63 sectors/track, 1245 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004f041

[Code]....

So as you can see, the array looks fine from the last four drives' point of view; however, the first four mark the last four drives as faulty for some reason. I am kind of clueless about what to do from this point on, honestly; I have data on this array that I'd really like to save.
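
This looks like a split-brain between the members' superblocks. A hedged sketch for that situation (device names are assumptions based on sda being the OS disk; --force rewrites event counts, so image the disks first if the data is irreplaceable):

Code:
# compare the event counters the two halves disagree on
mdadm -E /dev/sd[b-i]1 | grep -E '^/dev|Events'
# then attempt a forced assemble with all eight members
mdadm --assemble --force /dev/md0 /dev/sd[b-i]1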


Server :: Configure Mdadm W/ Hotspare?

Mar 11, 2010

I have a machine that supports 4 SATA drives, and I have identical drives in each slot. Can someone please tell me how I can create a RAID5 array on Linux and have the 4th drive (/dev/sdd) as a hot spare for any of the three drives in the array/volume? (A sketch of the create command follows below.) I did the following:

/device = type @ partition size
/dev/sda1 = fd @ 100 MB (bootable, for /boot)
/dev/sda2 = fd @ 1 GB (going to be used for swap)

[code]....
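
For the array itself, mdadm can take the spare in the same create command. A hedged sketch (the partition names are assumptions following the layout begun above):

Code:
# three active members plus one hot spare
mdadm --create /dev/md2 --level=5 --raid-devices=3 --spare-devices=1 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3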


Debian Configuration :: What Can Prevent Setting Up RAID1 On Server

Jan 7, 2011

We have the following server at colocation: [URL]

The provider's technicians worked for 3 hours but were finally unable to set up hardware RAID1 on it.

What could prevent them from doing it? Is it difficult to set up RAID1? It is mentioned as a basic function in the specifications.

They said Debian does not boot after the RAID is configured...


Server :: Backup Server - With RAID1 - 14

Mar 6, 2011

I am a college student (CompSci) who moves around a lot with a laptop. I back it up often, but I don't want a simple USB HD that can be stolen from my dorm and/or damaged (it's already been damaged). I am making a file server with RAID 1 that will sit at my parents' house for safer backups. I just need a few pointers; I have never experimented with RAID before.

Software: Fedora 14 with software RAID 1. I will only have ssh running, on a port other than 22, behind a router, with keyed entry only, so I can remotely back up my stuff.

Hardware: A new(ish) P4 mobo with two (2TB) hd's (for RAID 1) and one small hd for the OS.

My questions:
1) Should I have the OS installed on a separate drive or on the two RAID drives? I am using software RAID, not hardware, so I assume I need two separate drives for the RAID.

2) Should I be using more than two HDs for a RAID 1 array?

3) How can I encrypt the RAID drives? As I said before, I have no experience with RAID.

4) If the OS drive fails, can I just grab a new HD and install Fedora on it to get the data off my RAID array? Or do I need to image the Fedora drive every so often?

5) If one of the RAID drives fails, is there some sort of daemon that can tell me? (See the sketch below.) I will not be at the house physically, so I will not be able to hear scratching platters :P. Also, because the size of a single disk in the array is 2 TB, can I just go out and get any kind of 2 TB drive to replace the failed one?

6) If the MoBo fails, can I just pop in a new one (of any kind) and continue using my same array?
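
On question 5, mdadm itself can run as a monitoring daemon and send mail when a member fails or a spare activates; a hedged sketch (the address is a placeholder):

Code:
# watch all arrays and email on failure events
mdadm --monitor --scan --daemonise --mail=admin@example.com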


Server :: Get Summary Overview Of All The Md-devices At Once With Mdadm

May 13, 2010

I'm performing some experiments on a box with 5 x 36.4GB disks. Right now I have partitioned all the drives into chunks of 1GB and put them together 3 by 3 in raid5. (Yeah, there are a lot of md devices...) 'mdadm --query' and 'mdadm --detail' give summary/detailed information about the mds, provided that I specify the device.

My question is whether there is a way to get a summary overview of all the md devices at once. I've gone through the manual but haven't found anything that resembles a useful command.
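
Two standard ways to get the all-arrays overview (a hedged sketch; the loop glob assumes the usual /dev/mdN naming):

Code:
# one-screen summary of every active array
cat /proc/mdstat
# one line of identity per array
mdadm --detail --scan
# full detail for each md device
for md in /dev/md[0-9]*; do mdadm --detail "$md"; done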


Server :: Weird With Software Raid-5 Mdadm?

Feb 20, 2011

My only goal is to have a raid5 that auto-assembles and auto-mounts. Hardware: 4 x 2TB SATA (raid disks), 1 x 500GB IDE (OS disk), 1 x DVD IDE, all plugged directly into the motherboard (nForce 750i SLI).

Starting partitions on the raid disks: GPT, ext4. The problem occurs when I restart my computer after building the array for the first time. I was able to see it assemble, I was able to partition it, I even mounted it once. This is the second time I've built it, so I have watched everything that happened. I don't know if this has anything to do with my problem, but when I created the raid my drive designations were: sda - 500GB (OS), sd[bcde] - 2TB (raid). When I restarted: sd[abcd] - 2TB (raid), sde - 500GB (OS).

[Code]...
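
The device-name reshuffle is harmless if the array and filesystem are referenced by UUID instead of by /dev/sdX names. A hedged sketch of pinning things down:

Code:
# record the array by UUID so assembly survives name changes
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# get the filesystem UUID for a UUID= entry in /etc/fstab
blkid /dev/md0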







