Software :: RAID 5 With Even Number Of Drives Gives Bad Write Performance
Oct 27, 2010
So I have been doing some RAID 5 performance testing and am getting some bad write performance when configuring the RAID with an even number of drives. I'm running kernel 2.6.30 with software-based RAID 5. This seems rather odd and doesn't make much sense to me. For RAID 0 my performance consistently increases as I add more drives, but this is not the case for RAID 5. Does anyone know why I might be seeing lower performance when constructing my RAID 5 with 4 or 6 drives rather than 3 or 5?
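In case it helps others compare configurations, a minimal sequential-write test per layout might look like the sketch below (/dev/md0 is a placeholder for the array under test, and writing to it destroys any data on it). One hedged guess at the pattern: with an even number of drives the data portion of a full stripe, (N-1) x chunk, is no longer a power of two (e.g. 3 x 64KB = 192KB instead of 2 x 64KB = 128KB or 4 x 64KB = 256KB), so fewer writes line up with whole stripes and more of them fall back to read-modify-write.

[code]
# Sequential write test against the raw array (DESTROYS DATA on /dev/md0).
# /dev/md0 is a placeholder for the RAID 5 array being tested.
dd if=/dev/zero of=/dev/md0 bs=1M count=4096 oflag=direct

# Repeat after rebuilding the array with 3, 4, 5 and 6 drives and compare MB/s.
cat /proc/mdstat   # confirm the array is not still resyncing before testing
[/code]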
I wasn't sure where to post this question, so administrators, feel free to move it. I have a media server I set up running Ubuntu 10.04 Server, and I set up a software RAID 5 using 5 Western Digital Caviar Green 2TB 7200RPM 64MB drives. Individually they benchmark (using Ubuntu's mdadm GUI, palimpsest I think) at about 100-120MB/s read/write. I set the RAID 5 up with a stripe size of 256KB, and then I waited the 20 hours it took to synchronize. My read speeds in RAID are up to 480MB/s, but my write max is just under 60MB/s. I knew my write performance would be quite a bit lower than my read, but I was also expecting at least single-drive performance. I have seen other people online with better software RAID results, but I have been unable to achieve what they have.
My bonnie++ results are more or less identical (I used mkfs.ext4 and set the stride and stripe-width). The PC has 2048MB of RAM and a 2.93GHz dual-core Pentium (Core 2 architecture), so I doubt that's the bottleneck. These drives are on the P55 (P45*) south bridge SATA controller.
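For reference, the stride/stripe-width numbers for this layout work out as below (a sketch only, assuming a 4KB ext4 block size and that the array is /dev/md0):

[code]
# For a 5-drive RAID 5 with a 256KB chunk and 4KB ext4 blocks:
#   stride       = chunk / block size    = 256KB / 4KB = 64
#   stripe-width = stride * (drives - 1) = 64 * 4      = 256
mkfs.ext4 -b 4096 -E stride=64,stripe-width=256 /dev/md0
[/code]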
I'm currently experiencing some serious issues with WRITE performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article btw!) Using dd to measure, write performance is only 8.7 MB/s. Read is great though, at 74.5 MB/s. The tests were run straight after rebooting and I have not (YET!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.
[code]...
As you can see from the bo column there is definitely something stalling. As per the top output, the %wa (waiting for I/O) is always around 75%, yet as above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new and smartctl (smartmontools) does not detect any faults.
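For anyone trying to reproduce this, the stalls can be watched with something like the following (a sketch; the array name is a placeholder):

[code]
# Watch blocks written out (bo) and I/O wait, once per second
vmstat 1

# Per-device utilisation and await times (from the sysstat package)
iostat -x 1

# Check whether the array is resyncing, which would explain heavy background I/O
cat /proc/mdstat
[/code]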
I have recently migrated my file server over to an HP Microserver. The server has two 1TB disks in a software RAID-1 array, using mdadm. When I migrated, I simply moved the mirrored disks over from the old server, Ubuntu 9.10 (server), to the new one, 10.04.1 (server). I have recently noticed that write speed to the RAID array is *VERY* slow: on the order of 1-2MB/s (more info below). Now obviously this is not optimal performance, to say the least. I have checked a few things; CPU utilisation is not abnormal (<5%), nor is memory / swap. When I took a disk out and rebuilt the array with only one disk (tried both), performance was as expected (write speed >~70MB/s). The read speed seems to be unaffected, however!
I'm tempted to think that there is something funny going on with the storage subsystem, as copying from the single disk to the array is slower than creating a file from /dev/zero to the array using dd. Either way I can't try the array in another computer right now, so I thought I would ask to see if people have seen anything like this! At the moment I'm not sure if it is something strange to do with having simply chucked the mirrored array into the new server, perhaps a different version of mdadm? I'm wondering if it's worth backing up and starting from scratch! Anyhow this has really got me scratching my head, and it's a bit of a pain! Any help here would be awesome, e-cookies at the ready! Cheers
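A few things worth checking, sketched below under the assumption the array is /dev/md0; a write-intent bitmap with a small chunk is one common cause of slow RAID-1 writes after an array is carried over to a new install:

[code]
# Array state, sync status, and whether an internal write-intent bitmap is active
cat /proc/mdstat
mdadm --detail /dev/md0

# If a bitmap with a tiny chunk is present, removing it (or re-adding it with a
# larger chunk, value in KiB on older mdadm) sometimes restores write speed.
mdadm --grow --bitmap=none /dev/md0
mdadm --grow --bitmap=internal --bitmap-chunk=65536 /dev/md0
[/code]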
I have created a RAID on one of my servers. The RAID health is OK but it shows a warning, so what could be the problem? WARNING: 0:0:RAID-1:2 drives:153GB:Optimal Drives:2 (11528 Errors)
I'm breaking into the OS drive side with RAID-1 now. I have my server set up with a pair of 80 GB drives, mirrored (RAID-1) and have been testing the fail-over and rebuild process. Works great physically failing out either drive. Great! My next quest is setting up a backup procedure for the OS drives, and I want to know how others are doing this.
Here's what I was thinking, and I'd love some feedback: Fail one of the disks out of the RAID-1, then image it to a file, saved on an external disk, using the dd command (if memory serves, it would be something like "sudo dd if=/dev/sda of=backupfilename.img") Then, re-add the failed disk back into the array. In the event I needed to roll back to one of those snapshots, I would just use the "dd" command to dump the image back on to an appropriate hard disk, boot to it, and rebuild the RAID-1 from that.
Does that sound like a good practice, or is there a better way? A couple notes: I do not have the luxury of a stack of extra disks, so I cannot just do the standard mirror breaks and keep the disks on-hand, and using something like a tape drive is also not an option.
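If it helps, the fail / image / re-add cycle described above might look roughly like the following with mdadm (a sketch only; the array, partition, and image path are placeholders):

[code]
# Mark one mirror as failed and remove it from the array
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

# Image the removed disk to a file on the external drive
sudo dd if=/dev/sda of=/mnt/external/os-backup.img bs=1M

# Put the disk back and let the array rebuild
mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat    # watch the resync progress
[/code]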
So I set up a RAID 10 system and I was wondering what the difference between the active and spare drives is. If I have 4 active drives, then the two stripes are mirrored, right?
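Roughly: the four active drives hold the mirrored, striped data, while a spare sits idle until one of the active members fails and is then rebuilt onto automatically. A hedged creation example (device names are placeholders):

[code]
# 4 active members (mirrored pairs, striped) plus 1 hot spare
mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
      /dev/sd[bcdef]1
[/code]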
I built a RAID 5 storage array using 'mdadm' on 3x WD Green 1TB hard drives. I used the Disk Utility GUI to create the array and it took about 24 hours to build. When I started copying files to it, I noticed it performed at decent speeds for a while, then got really slow, then sped back up. Just for laughs I ran the Read-Only Benchmark function in the Disk Utility and got a graph that would even confuse stock brokers. Any thoughts on what the issue might be? I tried searching around for an answer, but most people are only affected by write performance issues, not read.
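Two things that might be worth ruling out first (a sketch; md0 is a placeholder): an initial resync still running in the background, and the small default RAID 5 stripe cache.

[code]
# Is the array still resyncing? A background resync makes benchmarks very erratic.
cat /proc/mdstat

# RAID 5 stripe cache (pages per device); the default of 256 is often small
cat /sys/block/md0/md/stripe_cache_size
echo 4096 | sudo tee /sys/block/md0/md/stripe_cache_size
[/code]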
I've been having some problems copying files to USBs. If I'm copying a large (100MB+) amount of data, at random points the transfer will just stop for 30+ seconds before continuing. Sometimes it doesn't start up again at all. Consequently, the write speed drops to less than 1MB/sec, sometimes as low as 100 KB/sec. I do not have these problems on Windows 7, where I achieve speeds of ~16 MB/sec easily. I have had the same results with several USBs (2-32 GB) on several file systems (fat32, ext2) with several different computers running fully patched versions of Ubuntu 10.04, which suggests the problem is related to the way the OS accesses the hardware.
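One tunable sometimes implicated in exactly this stall-then-burst pattern is how much dirty page cache the kernel lets build up before flushing to the slow USB device; lowering it is a sketch worth trying (the values are illustrative, not recommendations):

[code]
# Current dirty-writeback thresholds
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Temporarily lower them so writes to slow USB media are flushed sooner
sudo sysctl -w vm.dirty_background_ratio=1
sudo sysctl -w vm.dirty_ratio=5
[/code]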
I am experiencing disk write performance issues and I cannot find the cause. I have an LSI 9211-8i SAS 2 controller (latest firmware), CentOS 5.5 with the latest x86_64 kernel (2.6.18-194.17.4.el5 #1 SMP, with the latest LSI driver v7.00 dated Jul 27) and Seagate Cheetah ST3600057SS drives. These drives have a standard sustained write performance of >200MB/s (and read as well); with Fedora 13 (same machine), issuing dd if=/dev/zero of=/dev/sdo bs=1024k count=16384 (a 16GB direct device write) normally gets to 213 MB/s (repeated retries). On CentOS 5.5 I am getting speeds around 110-113 MB/s. iostat does not show anything specific (just 1.3% wait, CPU 99.7% idle). There are 14 drives; I tried with several of them, same figures. Reads go around 200 MB/s.
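Since the same hardware is fast under Fedora 13, comparing the block-layer settings between the two kernels may narrow it down; a diagnostic sketch (sdo as in the test above):

[code]
# Compare these between the Fedora 13 and CentOS 5.5 boots
cat /sys/block/sdo/queue/scheduler
cat /sys/block/sdo/queue/max_sectors_kb
cat /sys/block/sdo/queue/read_ahead_kb
cat /sys/block/sdo/queue/nr_requests

# Disk write-cache state on the SAS drives (sdparm, if installed)
sdparm --get=WCE /dev/sdo
[/code]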
I've got 4 identical 1TB drives and would like to use them in a software RAID configuration on my home server. I'm running Debian Linux, using the 'mdadm' utility to manage the software RAID. I don't know how much of what I've read is fact, dated, or even false, so I decided I would ask here to get help from people who know more about this than I do. This is essentially just a file server machine to store all my data, so given that I've got four identical SATA hard drives, I was thinking about doing RAID level 5. I guess I'll start here and ask if that is the recommended level of RAID. I think RAID level 5 will be fine for my general server usage. My second issue is partitioning the four individual drives to get maximum performance / space from them. Basically I'm just asking how you would recommend I partition the drives. I was thinking about doing three separate partitions per drive:
/dev/sda1 = 4 GB (swap)
/dev/sda2 = 1 GB (/boot)
/dev/sda3 = 995 GB (/)
Now from that partition schema above, obviously all the types will be 'fd' for RAID and the partition for /boot is going to be bootable. My confusion is that I read GRUB doesn't support booting from RAID 5, since GRUB can't handle the array assembly. If /dev/sdx2 (sda2, sdb2, sdc2, sdd2) are partitioned for /boot (bootable), how would you guys configure this RAID to match up equally? I don't think I can do a RAID level 1 on 4 identical partitions, right?
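You can in fact mirror /boot across all four sdX2 partitions as a 4-way RAID 1 (RAID 1 simply keeps extra copies), so GRUB can read it from any single disk. A hedged sketch of the assembly, assuming the layout above (filesystem choices are only examples):

[code]
# /boot: 4-way mirror so GRUB can read it from any one disk
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]2
mkfs.ext3 /dev/md0

# / : RAID 5 across the large partitions
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]3
mkfs.ext3 /dev/md1

# Install GRUB to the MBR of every disk so any of them can boot
grub-install /dev/sda   # repeat for sdb, sdc, sdd
[/code]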
I have Ubuntu 10.04 LTS. I have a script that prompts you to insert a USB HDD/memory stick, then it needs to determine its serial and model and save them to a file. The script can determine the device file of the inserted drive (ex: /dev/sdc), and if hdparm -I /dev/sdc works then all is OK. The problem is that not all USB HDDs/memory sticks work with hdparm. When it does not work I get the following:
The other command to get the serial for USB devices of any kind is lsusb -v or lsusb -D /path/to/bus/address. The problem with the first one (lsusb -v) is that it lists all USB devices and there's no way for me to script it to detect the inserted USB drive/memory stick and get the serial/model for it. The problem with the second one (lsusb -D /path/to/bus/address) is that I do not know how to get the correct USB bus address of the inserted USB drive/memory stick.
Is there any other way to get the serial number/model of a USB HDD/memory stick using its /dev/sdc device file? Is there any way to link the /dev/sdc device file with the /dev/bus/usb/address device file so I can then use the lsusb -D command to get the required info?
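udevadm can usually bridge exactly that gap: given the /dev/sdX node it reports the USB model and serial without needing hdparm, and walking up the device's parents exposes the USB bus address too. A sketch (sdc is the example device):

[code]
# Model and serial as udev sees them for the block device
udevadm info --query=property --name=/dev/sdc | grep -E 'ID_MODEL=|ID_SERIAL='

# The parent USB device's busnum/devnum attributes, usable with lsusb -s <bus>:<dev>
udevadm info --attribute-walk --name=/dev/sdc | grep -E 'busnum|devnum'
[/code]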
I installed, on an Intel SR2612UR server with integrated LSI expander and an Adaptec 5805Z controller, a RAID 6 array with 12x 2TB drives (2 ext4 partitions: 16TB and 2TB), but I get very poor read speed: depending on the block size I set, it varies between 130 MB/s and at most 170 MB/s, which is really poor for a RAID 6 array of 12 drives; on a similar Dell server I get above 650 MB/s. The Adaptec controller has the latest firmware installed.
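One cheap thing to compare against the Dell box is the read-ahead setting on the array device, which has a large effect on sequential reads from wide arrays (a sketch; /dev/sdb here stands in for whatever the Adaptec logical drive appears as):

[code]
# Current read-ahead, in 512-byte sectors
blockdev --getra /dev/sdb

# Try a much larger read-ahead (65536 sectors = 32MB) and re-run the read test
blockdev --setra 65536 /dev/sdb
[/code]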
I need to write a script which will get a number from STDIN and then, with that number, echo a set number of questions (it's for a firewall config). Here's what I want the user to receive.
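A minimal sketch of that read-a-count-then-loop pattern, with the prompts and output file as placeholders for whatever the firewall config actually needs:

[code]
#!/bin/bash
# Read a count from standard input, then ask that many sets of questions.
read -p "How many rules do you want to define? " count

for (( i = 1; i <= count; i++ )); do
    read -p "Rule $i - source address: " src
    read -p "Rule $i - destination port: " port
    echo "rule $i: from $src to port $port" >> firewall-rules.txt
done
[/code]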
Compared to my laptop notebook with a 5400rpm HD, the write performance of RAID 1 on an Ubuntu Lucid server is unacceptable. In the beginning, I installed Ubuntu 9.04 server (alternate) using RAID 1 with two 7200rpm WD 1TB HDs (Green Power) and then performed a dist-upgrade to 9.10 and then to 10.04.
I guess the write performance was initially reasonable, since the installation and data migration (copy from another computer over the LAN) didn't take too much time. However, after upgrading the server to 9.10 or so, I found large file uploads through Samba or FTP tend to block and time out. Changing the daemon or the client program made no difference, so I tried to test the read/write performance on the server to figure out the situation.
To my surprise, using strace I found that even a simple program like cp would easily get blocked in a write() system call for tens of seconds. Hence, I performed another disk-writing test using dd for data sizes ranging from 50MB to 1GB. The performance test commands are listed as follows:
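(The exact commands did not survive the copy above; the following is only a guess at their shape, based on the sizes discussed below, and not the poster's original test.)

[code]
# Illustrative only -- sequential writes of a given size, flushed before timing ends
dd if=/dev/zero of=/srv/testfile bs=1M count=200 conv=fdatasync
dd if=/dev/zero of=/srv/testfile bs=1M count=1024 conv=fdatasync
[/code]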
If the data to write is 150MB or less, the command returns immediately at very high speed, but the RAID disks then start to sync and stay busy, so the terminal prompt seems to freeze. I think this behavior is normal under a RAID 1 configuration, isn't it?
But when the data size is 200MB, the test command blocks for seconds and the write speed is measured at about 16.6MB/s. Of course, the RAID disks still start to sync and stay busy afterwards. Next, I tested writing with data of size 1GB. The command blocks for about 770 seconds (<2MB/s), while the same test runs for only 17.49 seconds (60MB/s) on my laptop.
I also burned a Lucid LiveCD to boot the server and mounted the RAID device to run the test again, but the results remain similar. Does that mean that even if I re-install the system on the RAID, the problem will never disappear?
PS: the disks run in UDMA6 mode, unchanged.
I am attempting to upgrade a system from 4.7 to 5.2 using a (now) DVD drive attached to the onboard IDE. Originally I had tried using a remote NFS image and a USB stick, but I thought maybe there was a problem with the image. I can get up to the point in the installation of selecting the keyboard for the system, and then it freezes and never goes any further. It doesn't appear to be a kernel panic since I can still switch between consoles.
I've got an MSI K9NGM2-FID with 14 drives in it. It serves as a file server for our backup server. It's got a secondary 4-port Silicon Image SiI 3114 SATA card using the sata_sil module, and an old IDE Promise FastTrak TX2000. Technically I could have 16 drives, but the 750W PSU is walking a fine line on tripping its self-breaker with the 14 drives and 7 fans. I would like to NOT have to disconnect all of this to do the upgrade.
I thought maybe that running the install with the "noprobe" option would help, so it didn't detect and load the modules for the Silicon Image or Promise cards and find all of the drives, but it still gets stuck on the step after selecting the keyboard. The installation info console and the dmesg console don't really provide any useful information. The installation console says:
INFO : moving (1) to step welcome
INFO : moving (1) to step language
INFO : moving (1) to step keyboard
INFO : moving (1) to step findrootparts
And the last lines of the dmesg console say:
<6>device-mapper: multipath: version 1.0.5 loaded
<6>device-mapper: multipath round-robin: version 1.0.0 loaded
<6>device-mapper: multipath emc: version 0.0.3 loaded
Is there a hidden "debug" option that will turn on a lot of extra logging?
So I recently set up a Fedora 13 server using software RAID. Let me go over the initial install and maybe that will help explain why I'm running into problems with one of the arrays. During installation I had only 2 disks in the machine (WD 750GB each). I partitioned them thusly:
I have two SAS RAID controller cards in a Dell server, in slots 2 & 3, both with an array hanging off them. I went to install a third card into slot 1, but when it boots it says two of my sd's have a bad magic number in the superblock and it wants me to create an alternative one, which I don't want to do. If I remove the new card, the server boots perfectly, like it did before I added the new card. Is the new card trying to control stuff that isn't hooked up to it because it's in slot 1, and is that confusing RHEL?
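One hedged possibility: adding a card in slot 1 re-ordered the controllers, so what used to be /dev/sda and /dev/sdb now have different names and the boot scripts are fsck-ing the wrong devices. Referring to filesystems by UUID or label instead of sdX names avoids that; a sketch (the UUID shown is a placeholder):

[code]
# List filesystem UUIDs with the new card installed
blkid

# /etc/fstab entries by UUID survive controller/slot re-ordering, e.g.:
# UUID=<your-uuid-here>   /data1   ext3   defaults   1 2
[/code]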
I was able to read and write to my USB thumb drives until I mounted a USB thumb drive with an autoinstaller, and ever since then all of my flash drives show up with the owner as "root". I've tried changing the owner, and I'm not allowed to, even when using sudo chown.
I don't get an error with that, but the owner isn't changed. I can read from the drives but not write to them. (I found that I can copy to the flash drive by using sudo cp, but I want to get away from command line stuff. I have too much to do to be constantly dropping into terminal mode!) I had no problems until the very minute I mounted that flash drive. I've read a few threads and found other people with symptoms similar to mine, but when I tried the fixes suggested, they didn't work. I've also not been able to locate anything that the flash drive autoinstaller changed or put in, but I must admit that I am still a relative newbie to Linux (been around computers since the early 70's, around Windows since 92-93, around OS/2 94-96, and Linux for only a year or so, and I haven't learned that much command line stuff yet!).
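For what it's worth, if those sticks are FAT-formatted then chown genuinely cannot work: FAT has no concept of file ownership, so the apparent owner is fixed at mount time. A sketch of remounting with your own uid (device and mount point are placeholders):

[code]
# FAT has no file ownership; set it at mount time instead of with chown
sudo umount /media/usbstick
sudo mount -o uid=$(id -u),gid=$(id -g) /dev/sdc1 /media/usbstick
[/code]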
I just did a fresh install of Fedora 11 and added RAID 1 following this tutorial: [URL].. Now I see the filesystem when I open 'Computer' in the GUI, and I open it and see 'lost+found', but I can't write to the drive. The option is simply greyed out. And when I view Properties on the drive and go to Permissions, it says 'The permissions of {driveid} could not be determined.'
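If the array carries a fresh ext filesystem, its top directory is owned by root, so a one-time ownership change on the mount point usually clears the greyed-out option (a sketch; the mount point and username are placeholders):

[code]
# Give your user ownership of the mounted array's root directory
sudo chown -R youruser:youruser /mnt/raid
[/code]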
I have a RAID 1 that is mounted and working. But for some reason I can also see the individual drives under Devices in GNOME Shell. Is there a way to hide them from GNOME, or from Linux in general, so only the RAID 1 can be seen?
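GNOME's device list comes from udisks, and udisks can be told to ignore specific devices with a udev rule; the exact variable depends on the udisks version (UDISKS_IGNORE for udisks2, UDISKS_PRESENTATION_HIDE for the older udisks1), so treat this as a sketch:

[code]
# /etc/udev/rules.d/99-hide-raid-members.rules
# Hide the raw member disks (sdb and sdc are placeholders) from the desktop
KERNEL=="sdb", ENV{UDISKS_IGNORE}="1"
KERNEL=="sdc", ENV{UDISKS_IGNORE}="1"
[/code]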
I was recently given two hard drives that were used as a RAID (maybe fakeraid) pair in a Windows XP system. My plan was to split them up, install one as a second HD in my desktop and load 9.10 x64 on it, and use the other for Mythbuntu 9.10. As has been noted elsewhere, the drives aren't recognized by the 9.10 installer, but removing dmraid gets around this, and installation of both Ubuntu and Mythbuntu went fine. On both systems after installation, however, the systems broke during update, giving a "read-only file system" error and no longer booting.
Running fsck from the live CD gives the error:
fsck: fsck.isw_raid_member: not found
fsck: Error 2 while executing fsck.isw_raid_member for /dev/sdb
and running fsck from 9.04 installed on the other hard drive gives an error like:
The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device>
In both cases I set up the drives with the ext4 filesystem. There's probably more that I'm forgetting... it seems likely to me that this problem is due to some lingering issue with the RAID setup they were in. I doubt it's a hardware issue since I get the same problem with the different drives in different boxes.
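This does look like leftover Intel (isw) fakeraid metadata on the disks still being detected after dmraid was removed. The metadata can be wiped from the drive; a sketch (double-check the device name first, as this erases the RAID signature):

[code]
# Show any fakeraid metadata dmraid can still see
sudo dmraid -r

# Erase the isw metadata from the drive (prompts for confirmation)
sudo dmraid -r -E /dev/sdb
[/code]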
I want to make a RAID 5 array with 4 2TB hard drives. One of the drives is full of data, so I will need to start with 3 disks and then, once I copy the data from the 4th onto the array, add the 4th drive. This will be my first experience with RAID. I've spent a few hours searching for info, but most of what I have found is a bit over my head.
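The 3-then-4 plan is doable with mdadm's grow support; very roughly it looks like the sketch below (device names are placeholders, and the reshape plus filesystem resize both take many hours on 2TB drives, so keep backups of anything you care about):

[code]
# 1. Build the array from the 3 empty drives and copy the data onto it
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
mkfs.ext4 /dev/md0
# ...mount /dev/md0 and copy the data off the 4th drive...

# 2. Add the emptied 4th drive and reshape from 3 to 4 devices
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4

# 3. Grow the filesystem into the new space once the reshape finishes
resize2fs /dev/md0
[/code]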
I have recently installed an Asus M4A77TD Pro system board which supports RAID.
I have 2 x 320GB SATA drives I would like to set up RAID 1 on. So far I have configured the BIOS to RAID 1 for the drives, but when installing Ubuntu 10.04 from the CD it detects the RAID configuration but fails to format.
When I reset all BIOS settings to standard SATA drives, Ubuntu installs and works as normal, but I just have 2 x drives without any RAID options. I had this working in my previous setup, but that's because I had the OS on a separate drive from the RAID and was able to do this within Ubuntu.
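One option, if the motherboard RAID isn't a hard requirement, is to leave the BIOS set to plain SATA and build the mirror in Ubuntu with mdadm instead, which tends to be better supported than on-board fakeraid. A sketch (partition names are placeholders):

[code]
# Software RAID 1 across the two 320GB drives instead of BIOS fakeraid
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0
[/code]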
I've got a 10.10 installation which I am using as a media/download server. Currently everything is stored on a 1TB USB drive. With the cost of disks falling, and the hassle of trying to back 1TB up to DVD (no, it's not going to happen), I was wondering if there's some Linux/Ubuntu utility which can use multiple disks to provide failover/resilience... Could I just buy another 1TB drive and have it "shadowing" the main one, so that if one goes, I buy another and then restore from the copy?
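mdadm RAID 1 is the usual answer: buy a second 1TB drive, mirror it with the first, and either disk can fail without losing the data (though RAID is not a backup, since deletions are mirrored too). A sketch with placeholder device names:

[code]
# Mirror two 1TB drives; either one can fail and the data stays available
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /srv/media
[/code]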