General :: RAID1 Resync Slow Using Mdadm?
Mar 11, 2010
I am running Kernel 2.6.18-128.el5 on a 64bit quad core machine with 8GB RAM. Using "mdadm" I set up a RAID1 array between two Western Digital 1.5TB drives. The problem is that the resync is running VERY slow. Here is the current status:
[root@royalflush shared]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdc5[1] hda5[0]
1304493952 blocks [2/2] [UU]
[=>...................]  resync =  6.2% (81592192/1304493952) finish=4280156.0min speed=4K/sec
unused devices: <none>
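A speed of 4K/sec with a finish estimate of over eight years means the resync is effectively stalled rather than just slow. Two things worth checking first, as a sketch: the md speed limits, and whether the PATA members (hda/hdc) have dropped out of DMA mode, a classic cause of crawling resyncs on hardware of this era:
Code:
# current floor/ceiling for resync throughput (KB/s)
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise the floor so competing I/O does not throttle the resync
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# check DMA state of each member; "using_dma = 0" would explain the crawl
hdparm -d /dev/hda
hdparm -d /dev/hdc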
View 1 Replies
Jun 9, 2010
I have two 80 GB IDE hard disks. I created a RAID1 setup on both drives using the [URL] link, and the RAID is working fine. But when I copy some data onto one hard disk (md0), this data is not automatically copied to the second hard disk (md1). I want data written to one hard disk to be automatically written to the second hard disk.
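For the mirroring to happen automatically, both disks normally have to be members of a single md array; md0 and md1 as described sound like two independent arrays, which md will never keep in sync with each other. A minimal sketch of building one two-disk mirror instead, assuming the member partitions are /dev/hda1 and /dev/hdb1 (this destroys existing data on them, so back up first):
Code:
# create one RAID1 array spanning both partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
Anything written to the mounted filesystem is then mirrored to both disks by the md driver itself.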
View 11 Replies
Jul 25, 2011
how can I stop the resyncing permanently, and how can I check whether normal SATA HDDs can support RAID before/after buying them? Every Saturday or Sunday a resync starts by itself, even though there is no entry about resyncing in crontab. But if I run "cat /proc/mdstat" it shows the RAID1 is fine. See the output below:
#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
513984 blocks [2/2] [UU]
[code]....
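A resync that starts every weekend with no crontab entry is most likely the distribution's scheduled RAID consistency check, not a rebuild; many distros ship a cron job for this (on RHEL/CentOS a raid-check script under /etc/cron.d or cron.weekly, on Debian-based systems /usr/share/mdadm/checkarray). A sketch of seeing and stopping a running pass, assuming the array is /dev/md0:
Code:
# shows "check" while a scheduled consistency pass is running
cat /sys/block/md0/md/sync_action
# abort the current pass
echo idle > /sys/block/md0/md/sync_action
Removing or rescheduling the cron job stops the weekly pass permanently, though the check is generally worth keeping since it surfaces latent sector errors. As for the drives: any plain SATA disk works with Linux software RAID; there is no special RAID support to shop for.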
View 5 Replies
May 12, 2010
I'm looking to recover a RAID1 array, hopefully using mdadm. I've not really used Linux much before, but I'm keen to learn to get my data back. Basically one of the disks in my Maxtor Shared Storage II (2x500GB SATA) died and I could do with either rebuilding the array or getting the data off another way.
I have a spare machine I could use for the recovery process. It has a spare drive but it's only 120 GB; I also have a bigger 320 GB disk, but that's IDE, not SATA. Do I need to purchase another 500GB SATA drive or can I use either of my spares? If I do need to buy a new drive, could I use a 1TB or 1.5TB, or will it have to be 500? Next question is what is the best version of Linux to use; I have Knoppix 6.2 and Ubuntu (not sure on version) already. I noticed that mdadm isn't installed by default on Ubuntu.
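On the disk question: a replacement only has to be at least as large as the old member, so a 1TB or 1.5TB drive would work (the extra space simply goes unused by the array), while the 120GB spare is too small to rebuild onto. To just get the data off, one common approach is to attach the surviving disk to a Linux machine and assemble the array in degraded mode, read-only. A sketch, assuming the data partition shows up as /dev/sdb1 (the Maxtor firmware may lay things out differently, so inspect first):
Code:
sudo apt-get install mdadm
# inspect the md superblock to confirm this really is the data member
sudo mdadm -E /dev/sdb1
# assemble and start the mirror with only one member present
sudo mdadm --assemble --run /dev/md0 /dev/sdb1
sudo mount -o ro /dev/md0 /mnt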
View 1 Replies
Nov 30, 2010
I am learning software RAID1 with CentOS 5.5. I created the RAID without any problems and removed the first drive to check that it still booted, which it did. I have now installed the old drive back in the system as hdc and need to resync the drives (the old drive's partitions are correct). I thought I could use raidhotadd, but it does not seem to exist anymore. How do I resync the drives in the array (hda primary, hdc secondary) using mdadm?
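raidhotadd came from the old raidtools package; mdadm replaces it with the --add operation, and the resync starts automatically once the partition is added back. A minimal sketch, assuming the array is /dev/md0 and the re-installed disk's member partition is /dev/hdc1:
Code:
# re-attach the returning member; md begins rebuilding onto it at once
mdadm /dev/md0 --add /dev/hdc1
# watch the rebuild progress
cat /proc/mdstat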
View 1 Replies
Jan 28, 2010
I'm trying to set up a RAID1 partition on my Ubuntu 9.10 workstation. On this dual-boot system, Ubuntu is running from a separate drive (/dev/sdc - an SSD that is quite small, which is why I need more disk space). Besides that, there are two traditional 500 GB hard drives, which have Windows 7 installed (I want to keep the Windows installation intact), and about half of the space unallocated. This space is where I want to set up a single, large RAID1 partition for Linux.
(This, to my understanding, would be software RAID, whereas the Windows partitions are on hardware RAID - I hope this isn't a problem... Edit: See Peter's comment. I guess this shouldn't be a problem since I see both drives separately on Linux.) On both disks, /dev/sda and /dev/sdb, I created, using fdisk, identical new partitions of type "Linux raid autodetect" to fill up the unallocated space.
Device Boot Start End Blocks Id System
/dev/sda1 1 10 80293+ de Dell Utility
/dev/sda2 * 11 106 768000 7 HPFS/NTFS
[code]....
But so is "Device or resource busy" when trying to create the RAID array. Quite strange.
Update: Could the device mapper have something to do with this? How do /dev/mapper and dmraid relate to all this mdadm stuff anyway? Both provide software RAID, but.. differently? Sorry for my ignorance here. Under /dev/mapper/ there are some device files that, I think, somehow match the 3 Windows RAID partitions (sd{a,b}1 through sd{a,b}3). I don't know why there are four of these arrays though.
$ ls /dev/mapper/
control isw_dgjjcdcegc_ARRAY1 isw_dgjjcdcegc_ARRAY3
isw_dgjjcdcegc_ARRAY isw_dgjjcdcegc_ARRAY2
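Those isw_* entries are the likely cause of the "Device or resource busy": dmraid has activated the Intel Software RAID (fakeraid) metadata that the Windows side uses, mapping the disks through device-mapper, so mdadm cannot claim them. The two are indeed different stacks: mdadm drives the native Linux md layer with its own on-disk metadata, while dmraid interprets BIOS fakeraid metadata and builds device-mapper mappings over the raw disks. A sketch of inspecting what dmraid holds:
Code:
# list discovered fakeraid sets and their member disks
sudo dmraid -s
sudo dmraid -r
Deactivating the mappings (sudo dmraid -an) would free the disks for mdadm, but it also takes the Windows fakeraid sets offline under Linux, so it is worth confirming exactly which partitions belong to the isw sets before going further.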
View 2 Replies
Oct 14, 2010
I'm having an issue with RAID6. I already created a thread in the "Linux - General" forum, but it seems there is no right audience there:
[URL]
View 1 Replies
Sep 20, 2010
I have a low-power machine I use as an SFTP server. It currently contains two RAID1 arrays, and I am working on adding a third. However, I'm having a bit more trouble with this array than I did with the prior arrays. My suspicion is that I have a bad drive, I am just not sure how to confirm it. I have successfully formatted both drives with ext3 and performed disk checks on both, which did not indicate a problem.
I can see the resync progressing in the blocks count, but it's incredibly slow. In the course of 5 minutes it progressed from 1024/1953511936 to 1088/1953511936. Checking top, not even 10% of my CPU is being used. Are there any other performance items I could check that could be affecting this?
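A resync crawling at a few blocks per second with an idle CPU usually points at one member struggling with reads or writes; a drive can format and fsck cleanly and still be failing. Two quick checks, assuming the new members are /dev/sdc and /dev/sdd (adjust to the real names):
Code:
# look for reallocated or pending sectors on each new drive
sudo smartctl -a /dev/sdc | grep -i -e reallocated -e pending
sudo smartctl -a /dev/sdd | grep -i -e reallocated -e pending
# watch per-disk latency during the sync (from the sysstat package)
iostat -x 5
A member showing far higher await/%util than its mirror partner in iostat is the prime suspect.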
View 7 Replies
Aug 1, 2010
I have ClearOS (CentOS) installed. I have 2 x 2TB SATA HDDs (hda & hdc). At installation time, I configured RAID1 (and LVM) between the two HDDs. After a power problem, the two HDDs were re-syncing; I watched it using "watch cat /proc/mdstat". The speed didn't exceed 2100 KB/s. I tried the following with no change:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
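speed_limit_min is only the floor md aims for when other I/O competes; the ceiling is speed_limit_max, and the kernel also exposes per-array knobs in sysfs. A sketch of raising both, assuming the array is /dev/md0:
Code:
# raise the global ceiling (KB/s)
echo 200000 > /proc/sys/dev/raid/speed_limit_max
# per-array minimum, overrides the global floor for this array
echo 50000 > /sys/block/md0/md/sync_speed_min
If the resync still sits near 2100 KB/s after this, the limits are not the bottleneck, and a slow member disk or a DMA/transfer-mode problem is more likely.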
View 1 Replies
Mar 31, 2011
I have 2 servers (hostnames xen1 and xen2) with the somewhat perverse configuration below. Each server has 4 SATA disks, 1 TB each.
16 Gb ddr3
debian squeeze x64 installed:
root@xen2:~# uname -a
Linux xen2 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux
Storage configuration: the first 256 MB and 32 GB of 2 of the 4 disks are used as RAID1 devices for /boot and swap respectively. The rest of the space, 970 GB on all 4 SATA disks, is used as RAID10. There is LVM2 installed over that RAID10; the volume group is named xenlvm (the servers are expected to be used as Xen 4.0.1 hosts, but the story is not about Xen troubles). /, /var and /home are located on logical volumes of small size (I just found out I got mixed up with LV names and partitions, but that's not the problem, I think):
[Code]...
View 3 Replies
Aug 11, 2010
Intending to set up an all-in-one server, I threw in the Ubuntu Server 10.04 (amd64) CD. During the text install, I set up the device topology below, and it worked.
[Code]....
Then I tested my RAID by hot-pulling the sda wire (ouch). It worked fine, the system kept running, and it also managed to reboot from the remaining sdb (which of course showed up as sda, lacking the first drive). Now I am trying to recover this pre-crash state. Adding the first disk back (showing up as sdb), I can add it to md0 and let it synchronize for 2 hours. But... I can't boot anymore with the recovered first disk as sda...
At first, booting got stuck at an initrd prompt after complaining it couldn't find my sys logical volume. After a lot of trial and error I don't even get any complaints, just a black screen which would let me wait for a boot for weeks... So, my system does not boot from my first disk, whether I plug in the second or not. My second disk still boots. My last attempt to get booting working again was: zero sda's first and last gigabyte to kill any IDs, duplicate sdb's first cylinder to sda to make it bootable, reinitialize sdb's partition table using command 'o' in fdisk for a new disk ID, recreate the sda1 partition, add sda1 to md0.
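Syncing md0 restores the data but not the boot code: each disk in a mirrored setup needs its own MBR and GRUB images, and a freshly synced replacement has neither. Rather than hand-copying cylinders, reinstalling GRUB on both disks from the working (sdb-booted) system is the usual recovery; a sketch for the GRUB 2 that Ubuntu 10.04 ships:
Code:
# put boot code on both disks so either one can boot alone
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
# rebuild the initrd in case the LVM/md layout changed since it was generated
sudo update-initramfs -u
The earlier initrd prompt about a missing sys logical volume also fits a stale initramfs, so regenerating it once the array is healthy is worth doing in any case.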
View 1 Replies
Nov 27, 2010
Posted this on the CentOS forum too, but I might get better attention here. I just moved my CentOS server to an mdadm RAID1 array. I partially followed this guide: [URL].. What I did was to boot up a live CD and make three partitions on both of my empty disks: one for /, one for swap and one for /vz (it's an OpenVZ server). I made those partitions into separate RAID1 arrays and then rsync-ed everything from the old disk to the new partitions.
After I had moved everything I did chroot into the new raid array and edited both grub config files and fstab, according to the guide.
[Code]...
I have managed to run the system on the RAID1 disks when using a Super Grub2 Disk off a CD, but that has its own grub and can boot any distro, so I can see that the system is working fine, except for grub. I have tried installing grub both from a live CD (Ubuntu 64bit) and when booted into the RAID1 array, but it gives the same results as stated above.
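For the GRUB legacy that CentOS 5 uses, the install has to be done onto both disks so either one can boot alone, and it is usually done from the grub shell (run from the booted array or from a chroot into it). A sketch, assuming /boot lives on the first partition of each disk and the members are /dev/sda and /dev/sdb:
Code:
grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Mapping each disk as (hd0) in turn is the trick: it writes boot code that still works when that disk is the only one present.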
View 2 Replies
Feb 7, 2011
I have a RAID1 array where mdadm states that one of the disks is "removed." Naturally, I assume one of the drives has failed. The mdadm --detail command tells me that the sda drive has failed. However, further inspection with the mdadm -E /dev/sdb1 command says that the sdb1 disk has been removed. I am a bit confused. Can someone clarify which drive has failed? Am I misreading the command outputs?
Code:
sudo fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
[Code]...
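The two commands answer different questions, which is why they seem to disagree: mdadm --detail reports the array's current view of its slots, while mdadm -E reads the superblock stored on one member, and that copy goes stale the moment the member drops out of the array. Comparing both members' superblocks, especially the Events counters, usually settles which disk actually left. A sketch:
Code:
sudo mdadm --detail /dev/md0
# the member with the lower Events count is the one that dropped out
sudo mdadm -E /dev/sda1 | grep -i -e state -e events
sudo mdadm -E /dev/sdb1 | grep -i -e state -e events
Whether the dropped disk has genuinely failed is then a job for smartctl -a on that disk.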
View 3 Replies
Nov 14, 2010
One of the hard drives in my server failed the other day; backups saved the day and downtime was only a few hours, but when setting up the new drive I went ahead and migrated to software RAID, in the hope it may give me less downtime in the future when a drive fails. It all went rather well, but my main root partition won't finish syncing for some reason.
sda was the original drive with sda4 as /, sda1 as /boot, and sda2 as swap. sdb was the drive that failed and was replaced with the new drive. So I set up sdb with the same partitions as sda, added it to a RAID1 array, copied files from sda, and rebooted with md4 as /, md1 as /boot, and md2 as swap. I added the sda partitions to the array, and the sync went off without a hitch on md1 and md2. md4 progresses well, but after a few hours /proc/mdstat just shows this:
Code:
root@d668:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[1] sda2[0]
9767424 blocks [2/2] [UU]
[Code]...
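When a sync appears to stall like this, the array's sysfs counters show whether md is still working and what state it believes it is in. A quick check, with md4 as the array in question:
Code:
# "resync"/"recover" means still running; "idle" means it stopped
cat /sys/block/md4/md/sync_action
# sectors done / total; re-run to see whether it is moving at all
cat /sys/block/md4/md/sync_completed
Since md4 is the in-use root filesystem, normal I/O also throttles the sync down toward the configured floor; raising /proc/sys/dev/raid/speed_limit_min can push a crawling root-partition sync along.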
View 3 Replies
Aug 7, 2011
I'm convinced that mdadm is going to be the death of me. I've wasted numerous hours on this so far without luck.
OpenSuse 11.4 on an old Supermicro box, creating a software RAID1 array across 2 x 500GB IDE disks: /dev/md0 as a 250MB partition across /dev/sda1 and /dev/sdd1 for /boot, and another 465GB partition across /dev/sda2 and /dev/sdd2 as an LVM partition to hold volumes for the various other OS filesystems. After the initial installation and configuration there was a series of mishaps with faulty IDE cables that had drives failing to show up at boot. Somehow, /dev/sdd2 got added to array /dev/md1 as a spare drive, and nothing I've done so far gets it to show up as an active drive.
The obvious step of failing the partition, removing it, then adding (or re-adding) will bring it back as a spare. I've tried roughly a dozen different permutations of those same steps. The latest was to 'dd if=/dev/zero of=/dev/sdd2' to clear the partition. Thought this might be the trick - after the zero, mdadm -E /dev/sdd2 reported 'no superblock' and no md1 configuration.
So 'mdadm --add /dev/md1 /dev/sdd2' and it still comes back as a spare. Here is mdadm -D /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Sat Jul 9 10:26:01 2011
Raid Level : raid1
Array Size : 488119160 (465.51 GiB 499.83 GB)
code....
I can't stop this array, the OS is running from there. I can't easily boot from CD to repair, all IDE ports have disks attached.
Does anyone have an incantation to promote a spare to active?
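One known way a two-disk RAID1 gets stuck like this is that the array's active slot count no longer matches: if mdadm -D shows "Raid Devices : 1", there is no open active slot for the spare to rebuild into, and every added device parks as a spare forever. In that case, growing the slot count back to 2 triggers the rebuild. A sketch, only worth trying after confirming the Raid Devices line in the mdadm -D output:
Code:
# re-open the second active slot; md then promotes the spare and resyncs
mdadm --grow /dev/md1 --raid-devices=2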
View 2 Replies
Jul 22, 2011
I have SLES10-SP3 running on an Intel SR1600URHS board with 3 hot-swap SATA disks configured using mdadm as Raid1 with hot spare. If I pull one of the active disks, all file i/o will stop for about 2.5 minutes after which it will start again and the raid array will be rebuilt using the spare disk. Is there any way I can reduce this 2.5 minutes of inactivity? I've tried setting /sys/block/sdX/device/timeout and /sys/block/sdX/device/retries to 1 for all disks, but this hasn't made any difference. The output from messages is:
12:11:56: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
12:11:56: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x1e data 0
12:11:56: res 40/00:03:00:00:20/00:00:00:00:00/b0 Emask 0x4 (timeout)
[code]....
View 1 Replies
Feb 27, 2011
I've run into a problem where the server freezes under heavy write load.
System
CentOS 5.5 x86_64 with latest updates and kernel (2.6.18-194.32.1). Also tried 2.6.18-194.26.1 and 2.6.37-2 from ELRepo with the same results.
CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Memory: 3 x 2Gb DDR3.
HDDs: 2 x Western Digital WDC WD1002FBYS-02A6B0
[Code].....
View 19 Replies
Jun 27, 2009
I have a used but good hard drive which I'd like to use as a replacement for a removed hard drive in an existing RAID1 array. mdadm --detail /dev/md0 shows:
0 0 0 -1 removed
1 8 17 1 active sync /dev/sdb1
I thought I needed to mark the removed drive as failed, but I cannot get mdadm to set it to "failed". I issue mdadm --manage /dev/md0 --fail /dev/sda1, but mdadm's response is: mdadm: hot remove failed for /dev/sda1: no such device or address. I thought I must mark the failed drive as "failed" to prevent RAID1 from trying to mirror in the wrong direction when I install my used-but-good disk. I want to reformat the good used drive first, right? I believe I must prevent the RAID array from automatically trying to mirror in the wrong direction.
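There is nothing left to fail here: the slot already reads "removed", which is exactly why mdadm answers "no such device or address". The mirroring direction is also not a worry, because when a disk is added to a degraded RAID1, md always copies from the remaining active member onto the newcomer; it cannot sync the wrong way. Wiping any stale md superblock off the used disk before adding it avoids leftover-metadata confusion. A sketch, assuming the replacement shows up as /dev/sda with a suitably partitioned /dev/sda1:
Code:
# clear old RAID metadata from the used-but-good disk
mdadm --zero-superblock /dev/sda1
# add it; md rebuilds onto it from /dev/sdb1 automatically
mdadm --manage /dev/md0 --add /dev/sda1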
View 7 Replies
Nov 2, 2010
I have Ubuntu Server 10.04 on a machine with a 2.8 GHz CPU and 1 GB DDR2, with the OS on a 2 GB CF card attached to the IDE channel and a software RAID5 of 4 x 750 GB drives. On a Samba share using these drives I am only getting around 5 MB/s, connected via wireless N at 216 Mbps, with my router and server both having gigabit ports. Is a RAID5 supposed to be that slow? I was seeing speeds of anywhere from 20-50 MB/s reported by other people and am just wondering what I am doing wrong to be so far below that.
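Before blaming the RAID, it helps to separate the array's throughput from the network path: wireless N at 216 Mbps tops out around 27 MB/s in theory and usually delivers far less, so the radio link alone could explain 5 MB/s. A local write test on the array settles it, as a sketch (assuming the RAID5 is mounted at /srv/share):
Code:
# write 1 GB to the array, forcing it to disk before reporting a speed
dd if=/dev/zero of=/srv/share/testfile bs=1M count=1024 conv=fdatasync
rm /srv/share/testfile
If that reports healthy speeds, repeat the Samba copy over a wired gigabit connection; a big jump would confirm the wireless hop as the bottleneck.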
View 4 Replies
Mar 3, 2010
I have a 4-drive RAID5 array set up using mdadm. The system is stored on a separate physical disk outside of the array. When reading from the array it's fast, but when writing to the array it's extremely slow: down to 20 MB/s compared to 125 MB/s reading. It writes a bit, then pauses, then writes a bit more and pauses again, and so on. The test I did was to copy a 5 GB file from the RAID to another spare non-RAID disk on the system: average speed 126 MB/s. Copying it back onto the RAID (into another folder), the speed was 20 MB/s. The other thing is the very slow, several-KB/s write speed when copying from an eSATA drive to the RAID.
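Bursty write behavior like this on md RAID5 often improves with a larger stripe cache, which lets md assemble full stripes in memory and write them with one parity computation instead of falling into read-modify-write cycles. A sketch, assuming the array is /dev/md0 (the value is in pages per member disk; 8192 pages x 4 KB x 4 disks costs roughly 128 MB of RAM):
Code:
# default is 256; larger values smooth sequential writes at the cost of RAM
echo 8192 > /sys/block/md0/md/stripe_cache_size
The several-KB/s eSATA case may be a separate issue (small files, or a slow source), so it is worth retesting that one after the change.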
View 9 Replies
Feb 25, 2010
On the ReadyNAS Duo, a RAID resync synchronizes the entire contents of one disk to another. It's a slow operation. So it's not something to be done lightly. By default, it is done whenever the unit is shut down uncleanly (generally the result of a hang or power interruption). It's something that can be disabled, though, such that a resync would be done only when manually requested. The authors of the software RAID driver in Linux (called "md") came up with a nice solution to the problem of slow RAID resyncs: the md driver allows you to assign a write intent bitmap to an md device, and where it's stored is configurable (the default is to store it on the md device in question, in the superblock). So when the array goes down unexpectedly (e.g., in a power outage), only the blocks that the md device was going to write will actually be resynced.
It's sort of like a journal for the md device itself. Now, the ReadyNAS Duo doesn't seem to use Linux md at all for implementing the RAID volume, but it does have an option to disable automatic resync in the event of an improper shutdown. And that leads to my question: how necessary is it to do a resync in such an event? If the Duo uses the same method as the Linux md driver, then obviously such a resync would rarely be necessary. But if it doesn't, then it would be the equivalent of using the md driver without a write-intent bitmap, and a resync would almost certainly be necessary in the event of an improper shutdown. Does the Duo (and maybe other units) use the equivalent of a write intent bitmap for RAID write operations?
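For reference, on a stock Linux md system the bitmap described above is a single command to add or remove, which gives a feel for how lightweight the mechanism is; a sketch, assuming the array is /dev/md0:
Code:
# attach an internal write-intent bitmap (stored in the md superblock area)
mdadm --grow /dev/md0 --bitmap=internal
# remove it again
mdadm --grow /dev/md0 --bitmap=none
Whether the ReadyNAS firmware implements an equivalent is something only Netgear's own RAID layer documentation (or support) can really answer.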
View 1 Replies
Nov 22, 2009
Here's a brief description of my system:
120GB Sata HDD - Primary OS drive
3 x 1.0TB Sata HDD - Raid 5 array
This is on a C2D MSI P35 Platinum board. Anyway, did a fresh install of F12 on the 120GB, which I had problems with - Anaconda refused to see the drive. Fedora Live could see it fine, and it was listed as an 'nvidia_raid_member' - no idea why, but I completely erased the disc under the Live CD and proceeded to install F12.
Once F12 was installed, I loaded up mdadm to re-activate my RAID5 array, using 'sudo mdadm --assemble --uuid=<the uuid>' - and it started with only 2 of the 3 drives. My /dev/sdb drive did not activate into the array, due to what mdadm said was a mismatched UUID. OK, so I erased /dev/sdb, intending to rebuild the array. Erased /dev/sdb, and then attempted 'sudo mdadm --add /dev/md0 /dev/sdb' and I get this error: "mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container" - I can find NO information on this error message.
[Code].....
I don't believe the hard drives are connected in the exact same order they were in before - I disconnected everything in the system and blew it out (it was pretty dusty)
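That 'member' array error means mdadm believes md0 uses external metadata inside a BIOS-RAID container, which fits the earlier nvidia_raid_member discovery: stale fakeraid metadata lives in the last sectors of a disk, where a normal erase never touches it. Removing that metadata should make /dev/sdb an ordinary disk again. A sketch, assuming /dev/sdb really is the disk to reclaim (this is destructive, so double-check the device name first):
Code:
# list any fakeraid signatures dmraid still sees
sudo dmraid -r
# erase the fakeraid metadata from the disk
sudo dmraid -r -E /dev/sdb
# also clear any md superblock remnants, then retry the --add
sudo mdadm --zero-superblock /dev/sdb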
View 1 Replies
Mar 27, 2011
So, I figure it's high time I learn to use this utility (or something else, doesn't matter to me). I want to backup a photo directory from an internal to external drive.
When I explore in Ubuntu, it names my external drive SimpleDrive. The destination directory on that drive I call Backup PhotoBank 31811; I think I had a leading zero that Ubuntu drops for the date (031811). My source drive is referred to by Ubuntu as 320 GB Filesystem, and the source directory on that drive is referred to as PhotoBank.
When I use these designations on the command line, rsync returns error messages stating that no such file or drive exists (note, I have not referenced any files within the directories). If I click on properties for the 320 GB Filesystem, there is another name designated, a very long combination of numbers and letters. I've tried using that name, but still get the same error messages from rsync.
Is there a front end for this program that is more graphical in nature? I don't mind using the command line, as I know that once I get this sorted out, it should be no problem, but I'm not too proud to resort to the ultimate in simplicity.
I am seeking to copy the files from the internal one time, then, I plan to delete the files from the internal, load it back up, copy the new files to the external, and so forth. I'm not really looking to sync the two drives.
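The names the file manager shows are volume labels, not paths: rsync needs the real mount points, which on Ubuntu usually sit under /media, and the long string of numbers and letters is the filesystem UUID. Spaces in the names also have to be quoted. A sketch, with the paths below guessed from the labels described (confirm them with the ls first):
Code:
# see what the drives are actually mounted as
ls /media
# trailing slash on the source copies the directory contents, not a nested folder
rsync -av --progress "/media/320 GB Filesystem/PhotoBank/" "/media/SimpleDrive/Backup PhotoBank 31811/"
As for a graphical front end: grsync is a common GTK wrapper around rsync, available in the Ubuntu repositories.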
View 9 Replies
Oct 17, 2010
I have 2 servers, both with software RAID5 for the main storage partitions: 6 x 1.5 TB drives. I observe that these arrays seem to resync quite often. cat /proc/mdstat and smartctl -a do not show any problems with the drives. Is this normal? What is the default period between resyncs, and what would cause a resync of the array? The standard resync time for the array is over 900 minutes, which becomes an inconvenience for the server users if it occurs on a weekday, as it degrades performance.
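Frequent resyncs with healthy drives are almost certainly the distribution's scheduled consistency check: most distros ship a cron job that kicks off a "check" pass on md arrays, often weekly or on the first Sunday of the month. Finding that job and moving it to a quieter window is usually better than disabling it. A sketch of locating it, and of stopping a pass that lands on a busy weekday, assuming md0:
Code:
# find the scheduled check job (names vary: raid-check, checkarray, mdcheck)
grep -ri raid /etc/cron.d /etc/cron.weekly /etc/cron.monthly /etc/crontab
# abort a running pass without harming the array
echo idle > /sys/block/md0/md/sync_action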
View 1 Replies
Jan 11, 2010
I am planning on setting up a 4x1TB RAID5 with mdadm under Ubuntu 9.10. I tried installing mdadm using "sudo apt-get install mdadm", all worked fine except for the following error:
Code:
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: 170: /dev/MAKEDEV: not found
failed.
The end result is the /dev/md0 device has not been created, as can be seen here:
Code:
windsok@beer:~$ mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
After googling, I found the following bug which describes the issue: [URL] However it was reported way back in April 2009, and it does not look like it will be fixed any time soon, so I was wondering if anyone knows a workaround for this bug, to get me up and running?
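Since the failure is only the postinst's device-node creation, one workaround is to create the node by hand; md devices use block major 9, with the minor matching the array number. A sketch:
Code:
# create the md0 device node the broken MAKEDEV step failed to make
sudo mknod /dev/md0 b 9 0
Alternatively, mdadm can create the node itself at array creation time via --auto=yes, e.g. mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=4 ...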
View 4 Replies
Nov 9, 2015
I currently have a problem running rsync on 64-bit Debian Jessie (the problem also occurred with 64-bit Debian Wheezy). I am trying to use rsync to archive my home directory (which is on a hard disk) to a USB memory stick. The home directory is about 18 GB in size and the memory stick has 32 GB.
Unfortunately, rsync hangs after copying a certain number of files and the process eventually has to be killed. Rsync was rerun but hung again at about the same point as before. This has now happened several times; each time the hang occurs at about the same point. Use of strace after the hang shows that rsync appears to be processing a PDF file at the time, although not always the same PDF file. I originally had the rsync hang problem on a PC which ran 64-bit Wheezy and used a USB 2.0 port.
I am now running rsync on another PC, which runs 64-bit Jessie and uses a USB 3.0 port. I have also tried three different USB sticks, two from one manufacturer and the third from another manufacturer. All give similar rsync hangs.
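One commonly suggested culprit for this pattern (an assumption about the cause, not a sure diagnosis) is dirty-page writeback: the kernel buffers a large amount of written data in RAM, and processes writing to the slow stick then block for long stretches while it drains, which looks exactly like a hang. Capping the dirty thresholds makes writeback start earlier and in smaller bites; a sketch:
Code:
# start background writeback at ~16 MB of dirty data, hard-block at ~48 MB
sudo sysctl -w vm.dirty_background_bytes=16777216
sudo sysctl -w vm.dirty_bytes=50331648
If rsync still hangs at the same spot after this, checking dmesg for USB resets at the moment of the stall would point at the stick or the port instead.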
View 11 Replies
Oct 27, 2010
I have a hypothetical situation in which I installed my operating system using a RAID1 mirror. At some point I decided that this setup was overkill, my machine isn't system critical, I value doubling my storage space more than speedy recovery, I'm doing routine backups, etc...
Short of backing up my system volume and repartitioning, or otherwise starting over, is there a way I can reconfigure my RAID1 array to only expect one disk so that mdadm no longer reports a Degraded state?
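mdadm can do exactly that in place: shrinking the mirror's slot count to 1 turns it into a deliberate single-device RAID1, and the degraded status goes away without touching the data. A sketch, assuming the array is /dev/md0 and the departed member has already been failed/removed from it:
Code:
# --force is required since a 1-disk raid1 has no redundancy
mdadm --grow /dev/md0 --raid-devices=1 --force
The array can later be grown back to two devices with the reverse command if redundancy is wanted again.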
View 3 Replies
Jan 13, 2011
I recently followed this guide to create a RAID1 [URL]... First I partitioned the disks with fdisk. I made the RAID array with
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1. Then I created the filesystem with mkfs.ext3 /dev/md0.
I then mounted md0 at /Video with mount /dev/md0 /Video/, all according to the guide. Today I made a Samba share out of /Video/Rorschach to easily put files in there from my Windows 7 machine (the plan is to stream from my CentOS server to my HTPC, which hasn't arrived yet). I started to put movies in there. It went just fine for a while, but then I got this message: [URL]... How is that even possible when df -h looks like this?:
[code]...
View 2 Replies
Jul 10, 2010
I've been trying all day to boot Debian on an LVM partition on a RAID1. I have found some howtos, but they only show how to do it for one or the other, not both at the same time. Using those howtos I think I have grub2 set up right; the problem is my kernel. It has support for both LVM and RAID built in. I set up the RAID and LVM partitions while running that kernel. But when I use it to boot the system on the LVM/RAID, it gives a kernel panic.
The OS is by itself on an old disk, sda1. The RAID1 is on two other disks, sdb1 & sdc1. It is divided into two logical volumes, vg-root & vg-media. I just copied the OS onto vg-root, then told grub to boot to it. The grub entry is like so... I tried setting root=(md0) but that didn't work either. I'm pretty sure the problem is with the kernel, but I don't see why, since it can see the RAID and LVM partitions once it is booted up, and both the RAID & LVM options are built into the kernel, so it should be able to see them at boot time.
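One gap worth checking in that reasoning: building LVM support into the kernel is not enough for root-on-LVM, because logical volumes are activated by userspace tools (vgchange), not by the kernel itself. That userspace normally lives in an initramfs, which would explain a panic before vg-root can be found; in-kernel autodetection only ever covered plain md arrays with 0.90 metadata. A sketch of generating a suitable initramfs from inside the copied system, assuming the root LV appears as /dev/mapper/vg-root (adjust to the real VG/LV names):
Code:
mount /dev/mapper/vg-root /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
apt-get install mdadm lvm2
update-initramfs -u -k all
update-grub
The grub entry then wants root=/dev/mapper/vg-root on the kernel line plus the matching initrd line.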
View 2 Replies
Aug 19, 2010
I have two HDs (let's say sda and sdb). Both are the same size and have the same partitions already (sda1/sda2/sda3 and sdb1/sdb2/sdb3). Basically they are ready to make a RAID1 array.
By writing new udev rules, I was able to create fixed HD device names using /sbin/scsi_id.
Example: for sdb1 I have a fixed device name created under /dev as hd2_boot1, for sdb2 I have /dev/hd2_boot2, and finally for sdb3 I have created the device /dev/hd2_boot3.
With using the command "mdadm --create /dev/md0 --level=1 ....", I could create a RAID array.
But when I check the status of one of the RAID devices, e.g. with the command "mdadm --detail /dev/md2", it still shows the sdb* devices as members of the RAID array, not the hd2_boot* devices. Something like this:
I would like to see as a member of the RAID array always /dev/hd2_boot3, not /dev/sdb3 (like above). Is this possible?
Bottom line: I would like to keep the order of the RAID arrays according to their SCSI IDs, not the SCSI numbering given by the kernel, since the numbering (sda, sdb, sdc, etc.) can change depending on the physical connection.
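mdadm --detail always prints the kernel's canonical names, because the udev aliases are just additional symlinks to the same devices; the display itself cannot be switched to them. Stable behavior does not depend on the names, though: md identifies members by the UUID in their superblocks, so pinning each array in mdadm.conf makes assembly immune to sda/sdb renumbering. A sketch (the UUID shown is a placeholder; take the real lines from the scan):
Code:
# append UUID-based array definitions to the config
mdadm --detail --scan >> /etc/mdadm.conf
# which produces lines of this shape:
# ARRAY /dev/md2 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx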
View 2 Replies