I am currently running Debian Squeeze on a headless system mainly used for backup. It has a RAID-1 made up of two 1TB SATA disks, which can transfer about 100 MB/s reading and writing. Yesterday I noticed that one of the disks was missing from the RAID configuration. After re-adding the drive and doing a rebuild (which ran at 80-100 MB/s)...
I have a Lian Li EX-503 external RAID enclosure with 4x2TB drives, set to RAID 10 for good performance [just for those who are interested: http://www.lian-li.com/v2/en/product/pr ... ex=115&g=f ]
[Code]...
But over eSATA my transfer rates are quite low, around 60-70 MB/s when copying from an internal drive to the external EX-503, even though hdparm tells me:
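For comparison with hdparm's figures, a rough way to measure real sustained throughput is a large dd copy; the mount point and sizes below are only placeholders:
Code:
# write a 4 GB test file to the eSATA array, flushing data to disk before the timing ends
dd if=/dev/zero of=/mnt/ex503/testfile bs=1M count=4096 conv=fdatasync
# drop the page cache, then read the file back so the read is not served from RAM
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/ex503/testfile of=/dev/null bs=1M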
I have a low-power machine I use as an SFTP server. It currently contains two RAID 1 arrays, and I am working on adding a third. However, I'm having a bit more trouble with this array than I did with the prior ones. My suspicion is that I have a bad drive; I am just not sure how to confirm it. I have successfully formatted both drives with ext3 and performed disk checks on both, which did not indicate a problem.
I can see it progressing in the block count, but it's incredibly slow: in the course of 5 minutes it progressed from 1024/1953511936 to 1088/1953511936. Checking top, not even 10% of my CPU is being used. Are there any other performance items I could check that could be affecting this?
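If a bad drive is the suspicion, one way to check each disk individually is its SMART data, assuming smartmontools is installed; the device names here are placeholders for the two new members:
Code:
# overall health plus the attribute table; reallocated or pending sectors are the usual red flags
smartctl -H -A /dev/sdc
smartctl -H -A /dev/sdd
# optionally start a long self-test and read the result once it finishes
smartctl -t long /dev/sdc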
I have ClearOS (CentOS-based) installed, with 2 x 2TB SATA HDDs (hda and hdc). At installation time I configured RAID 1 (and LVM) across the two HDDs. After a power problem the two HDDs started re-syncing, which I watched with: watch cat /proc/mdstat. The speed didn't exceed 2100 KB/s. I tried the following with no change:
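For reference, the usual knobs for md resync throughput are the kernel's RAID speed limits; raising them looks roughly like this (the values are only examples):
Code:
# current limits, in KB/s
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# raise the floor and ceiling so the resync is allowed to run faster
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000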
I have set up a software RAID with mdadm. It consists of four 2TB HDDs (one Samsung HD203WI and three HD204UI). When I do benchmarks with bonnie++ or hdparm I get about 60 MB/s write and 70 MB/s read. Each single drive in the array reads at over 100 MB/s when tested with "hdparm -t". I'm using openSUSE 11.4 x64 with the latest patches from the update repositories and the 2.6.37.6-0.7-desktop kernel, on 4 GB of RAM and an Atom D525.
As the title says, my RAID system is very, very slow (this is RAID 5 over six 1TB Samsung HD103UJ drives):
Code:
leo@server:~$ sudo hdparm -tT /dev/sdb
/dev/sdb:
 Timing cached reads:   1094 MB in  2.00 seconds = 546.79 MB/sec
 Timing buffered disk reads:    8 MB in  3.16 seconds =   2.53 MB/sec
That speed is impossibly low, and I've tried every configuration (RAID 5, RAID 0, pass-through) and get almost exactly the same result each time (with restarts in between, so I'm sure I'm really talking to the volumes I've defined).
One can compare it with the system drive:
Code:
leo@server:~$ sudo hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   1756 MB in  2.00 seconds = 878.12 MB/sec
 Timing buffered disk reads:  226 MB in  3.01 seconds =  75.14 MB/sec
Some hardware/software info:
Code:
RAID card: Controller Name ARC-1230
Firmware Version V1.48 2009-12-31
Code:
Motherboard Manufacturer: ASUSTeK Computer INC.
Product Name: P5LD2-VM DH
Code:
leo@server:~$ uname -a
Linux server 2.6.31-20-server #58-Ubuntu SMP Fri Mar 12 05:40:05 UTC 2010 x86_64 GNU/Linux
Code:
IDE Channels .....
I'm a bit lost now. I could change the motherboard, or some BIOS settings.
I've just finished setting up a RAID 1 on my system. Everything seems to be okay, but I have a very slow boot time. It takes about three minutes between the time I select Ubuntu from GRUB and the time I get to the login screen.
I found this really neat program called bootchart which graphically displays your boot process.
This is my first boot (after installing bootchart). I'm not an expert at reading these, but it appears there are two things holding up the boot, cdrom_id and md_0_resync. I tried unplugging my CD drive SATA cable, and this is the new boot image.
It's faster, but it still takes about a minute, which seems pretty slow on this system. The md0 RAID device is my main filesystem. Is it true that it needs to get resynced on each boot?
I'm not sure how to diagnose my CD drive issue. The model is a NEC ND-3550A DVD RW drive. I should also note that there's a quick error message at startup about the CD-ROM. It's too quick for me to read; just one line on a black screen saying "error: cdrom something something".
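On the resync question: an md array normally only resyncs after an unclean shutdown or a failed member, so it should not be happening on every boot. A way to check what md is doing, plus a common mitigation, would be something like this (md0 as named in the post):
Code:
# see whether the array is actually resyncing and how far along it is
cat /proc/mdstat
mdadm --detail /dev/md0
# add a write-intent bitmap so any future resync only has to touch dirty regions
mdadm --grow --bitmap=internal /dev/md0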
I have recently migrated my file server over to an HP MicroServer. The server has two 1TB disks in a software RAID-1 array managed by mdadm. When I migrated, I simply moved the mirrored disks over from the old server (Ubuntu 9.10 server) to the new one (10.04.1 server). I have recently noticed that write speed to the RAID array is *VERY* slow, on the order of 1-2 MB/s (more info below). Now obviously this is not optimal performance, to say the least. I have checked a few things: CPU utilisation is not abnormal (<5%), nor is memory/swap. When I took a disk out and rebuilt the array with only one disk (I tried both), performance was as expected (write speed > ~70 MB/s). The read speed seems to be unaffected, however!
I'm tempted to think there is something funny going on with the storage subsystem, as copying from the single disk to the array is slower than creating a file on the array from /dev/zero with dd. Either way, I can't try the array in another computer right now, so I thought I would ask whether people have seen anything like this! At the moment I'm not sure if it is something strange to do with having simply chucked the mirrored array into the new server, perhaps a different version of mdadm? I'm wondering if it's worth backing up and starting from scratch! Anyhow, this has really got me scratching my head, and it's a bit of a pain! Any help here would be awesome, e-cookies at the ready! Cheers
I have a 4-drive RAID 5 array set up using mdadm. The system is stored on a separate physical disk outside the array. Reading from the array is fast, but writing to it is extremely slow: down to 20 MB/s, compared to 125 MB/s reading. It writes a bit, then pauses, then writes a bit more, then pauses again, and so on. The test I did was to copy a 5 GB file from the RAID to another spare non-RAID disk on the system, at an average speed of 126 MB/s. Copying it back onto the RAID (into another folder), the speed was 20 MB/s. The other thing is the very slow write speed, several KB/s, when copying from an eSATA drive to the RAID.
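One RAID 5 specific knob worth checking for bursty, stalling writes is the stripe cache on the md device; md0 is a placeholder name here and the value is only an example:
Code:
# current stripe cache size, in pages per device (the default is usually 256)
cat /sys/block/md0/md/stripe_cache_size
# try a larger cache; it costs roughly size x 4 KB x number of drives in RAM,
# but often smooths out RAID 5 write stalls
echo 4096 > /sys/block/md0/md/stripe_cache_size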
I set up software RAID 5 on three 7200 rpm Seagate Barracudas (2x80GB, 1x40GB), with 40 GB partitions at the beginning of all three disks. Now, when I run benchmarks using the Disk Utility, I get about 52 MB/s read speed on the array. This seems kind of slow considering the average read speed of each individual drive is about 48 MB/s. Also, boot-up takes about 30 seconds, which is the same as when I had Ubuntu installed on just one of the 'cudas without RAID. Am I missing something? Shouldn't the read speed be more than twice as fast, since it can read from all three disks?
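For what it's worth, sequential reads from an md RAID 5 are often limited by a small default read-ahead rather than by the disks themselves; a quick experiment would be something like this (md0 is a placeholder, the value is just an example):
Code:
# read-ahead is reported in 512-byte sectors
blockdev --getra /dev/md0
# try a larger read-ahead, then re-run the sequential read benchmark
blockdev --setra 4096 /dev/md0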
I recently did an apt update and upgrade on my CLI-only Lenny server. Upon reboot I get an "ATA softreset failed (device not ready)" error for all of my SATA drives. I noticed the upgrade changed the kernel to "Linux debian 2.6.26-2-amd64" (I do have a 64-bit CPU). Once I get to a command prompt I can assemble my RAID 6 array with "mdadm --assemble" on /dev/sda through sdd and then mount it with mount -a. But transfers to the array are horribly slow, ~1 MB/s. On every reboot I get the same errors and have to assemble the array again.
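To make the assembly automatic again, the usual route on Debian is to record the array in mdadm.conf and refresh the initramfs; roughly:
Code:
# append the array definition to mdadm's config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so the array is assembled at boot
update-initramfs -u
This only covers the manual-assembly part; the softreset errors and the slow transfers appeared with the kernel change and are a separate issue.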
I've been, for some years, a happy user of Highpoint HPT374-based RAID cards, using RAID 5 with decent performance (consistently around 90 MB/s read and 60 MB/s write) on an old Athlon motherboard with a 2.6.8 kernel. Now that motherboard is dead, so I've got an ASRock A330GC (dual-core, with 4 GB of RAM), installed Debian with kernel 2.6.26 and moved the controller and disks over, and the write performance has dropped to a painful 9 MB/s, measured with dd of a huge file (checking the transfer rate with kill -USR1 $(pidof dd)).
Read performance is still around 90 MB/s: using hdparm -t repeatedly, the figures are consistently around 90 MB/s. I suspect some libata issue; with the old kernel the RAID was seen as hdb, now it is sdb, so one of the PATA drivers may be responsible. I've used Highpoint's v2.19 driver; the latest driver is broken (it causes a kernel oops while formatting the RAID), and I've informed Highpoint.
Here is some info: hdparm -i /dev/sdb reports: HDIO_GET_IDENTITY failed: invalid argument.
Could any RAID gurus kindly assist me with the following RAID 5 issue? I have an mdadm-created RAID 5 array consisting of 4 discs. One of the discs was dropping out, so I decided to replace it. Somehow this went terribly wrong, and I succeeded in marking two of the drives as faulty and then re-adding them as spares.
Now the array is (logically) no longer able to start:
mdadm: Not enough devices to start the array.
Degraded and can't create RAID, auto stop RAID [md1]
What I am considering is re-creating the array with:
Code:
mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
As I don't want to ruin what may be the small chance I have left to rescue my data, I would like to hear the input of this wise community first.
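Before anything destructive, it is usually worth dumping the existing superblocks, since the event counters and device roles show how far out of sync the members are, and a forced assemble is commonly tried before resorting to --create --assume-clean. A sketch, with device names as in the post:
Code:
# save the RAID superblock of every member for reference
mdadm --examine /dev/sd[abcd]2 > /root/raid-examine.txt
# attempt a forced assemble of the degraded array
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2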
I have a 6-disk mdadm RAID 10 array of Western Digital RE3 drives. IOzone and all the other tools I have tried are showing pretty lackluster reads for a six-drive RAID 10 array, in the area of 200 MB/s. Writes are closer to what I'd expect, around 340 MB/s. These drives fly with an Adaptec RAID controller, but as soon as I stick them in an mdadm array, the reads just aren't what I'd expect.
I have a small SSD system disk and a large data disk in a server based on Fedora Core. It is organized this way so that the data disk can be spun down most of the time, reducing noise and heat. I'd like some way to periodically create a bootable backup of the system disk on the data disk, so that if the SSD goes belly-up I lose minimal data and can bring the system back up very quickly. Some thoughts:
1. I know I can create a partition on the large disk and make the system boot from a RAID 1 mirror. However, the constant writes to the system disk (e.g. by the journal daemon) would ensure the data disk never spins down.
2. I can do a periodic format and cp -ax to the "copy of system" partition, but that doesn't leave the disk bootable, because it needs changes to /etc/fstab, and possibly a mkinitrd, to become bootable.
3. Or something like booting from a TFS (translucent filesystem), where the base layer would be the RAID and the writable layer would be on the SSD, with periodic pushes of the writable part down to the base layer (I'm not sure this is even possible).
4. In a virtual world this could be managed by appropriate placement and use of snapshot images, but I'm not in a virtual world, as far as I know :0).
Ideally I'd like something like RAID that only brought the copy of the system disk into sync with the running system infrequently (perhaps every few hours). Can anybody suggest how to do this?
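One low-tech approach along the lines of idea 2, but keeping the copy bootable, is a periodic rsync followed by a one-time fixup of fstab and the bootloader on the copy. This is only a sketch; the device names, mount point and excludes are placeholders:
Code:
#!/bin/sh
# sync the running system onto a backup partition on the data disk
mount /dev/sdb3 /mnt/sysbackup
rsync -aAXH --delete \
    --exclude=/dev/ --exclude=/proc/ --exclude=/sys/ \
    --exclude=/run/ --exclude=/tmp/ --exclude=/mnt/ \
    / /mnt/sysbackup/
umount /mnt/sysbackup
# the copy still needs its own /etc/fstab and a bootloader installed against the data disk
# (and possibly a rebuilt initrd) before it can boot on its own
Run from cron every few hours, this touches the data disk only during the sync window, so it can still spin down in between.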
I have two hard disks, sda and sdb. Is it possible to install the Debian root onto software RAID across partitions sda2 and sdb1, leaving all the other partitions 'normal' (non-RAID)? Do partitions sda2 and sdb1 need to be exactly the same size and position?
Someone asked a long time ago why software RAID is so much slower than hardware RAID. The answer was that software RAID uses processor time, and too much of it is needed. But what if DMA is used, and what if it's just RAID 0? Doesn't DMA make processing power much less relevant in this case, especially given today's super-fast processors? Given DMA, why is software RAID 0 still so slow compared to hardware RAID 0?
I am looking to build a server with 3 drives in RAID 5. I have been told that GRUB can't boot if /boot is contained on a RAID array. Is that correct? I am talking about a fakeraid scenario. Is there anything I need to do to make it work, or do I need a separate /boot partition which isn't on the array?
We've started using Debian-based servers more and more at work and are getting the hang of them a little more every day. Right now I'm an ace at setting up partitions, software RAID and LVM volumes etc. through the installer, but if I ever need to do the same thing once the system is up and running, I come unstuck.
Is there any way I can get to partman post-install, or are there any similar tools that do the same thing? Failing that, are there any simple guides to doing these things through the various command-line tools?
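As a rough map of the command-line equivalents: partitioning is done with parted or fdisk, software RAID with mdadm, and LVM with the pv/vg/lv tools. The device names, sizes and volume names below are only examples:
Code:
# partition a new disk
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary 1MiB 100%
# build a RAID 1 out of two partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# layer LVM on top of the array
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 50G -n lv_srv vg_data
mkfs.ext4 /dev/vg_data/lv_srv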
I installed mdadm fine and proceeded to run:
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sda /dev/sdb
with sda being my primary hard drive and sdb being the secondary. I get this error message upon running the command:
"mdadm: chunk size defaults to 64K
mdadm: Cannot open /dev/sda: Device or resource busy
mdadm: create aborted"
I don't know what's wrong!
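"Device or resource busy" usually means something is already using the whole disk, most often because it is the mounted system drive or already claimed by another array. A quick check looks something like:
Code:
# is /dev/sda (or a partition on it) mounted or in use as swap?
mount | grep sda
swapon -s
# is it already part of an md array?
cat /proc/mdstat
If sda really is the disk the running OS lives on, mdadm cannot take it over while the system is booted from it.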
I have a RAID 5 of ten 750 GB disks, and it has worked fine with GRUB for a long time under Ubuntu 10.04 LTS. A couple of days ago I added a disk to the RAID, grew it and then resized the filesystem. BUT I started the resize process in a terminal on another computer, and after some time my girlfriend powered down that computer! So the resize process was cancelled in the middle, I couldn't access any of the HDDs, and I rebooted the server.
Now the problem: the system is not booting up, just a black screen with a blinking cursor. I used a rescue CD to boot it, finished the resize process, and the RAID seems to be working fine, so I tried to boot normally again. Same problem. Back on the rescue CD I updated GRUB and got several errors: error: unsupported RAID version: 0.91. I have tried to purge grub, grub-pc and grub-common, removed /boot/grub and installed GRUB again. Same problem.
I have tried erasing the MBR boot code (# dd if=/dev/zero of=/dev/sdX bs=446 count=1) on sda (IDE disk, system) and sdb (SATA, the new RAID disk). Same problem. I removed and reinstalled Ubuntu 11.04 and am now getting error: no such device: (hdd id). Again I tried to reinstall GRUB on both sda and sdb, with no luck. update-grub still generates the error about RAID version 0.91, and a normal boot is back to a blinking cursor. When you resize a RAID, mdadm changes the metadata version from 0.90 to 0.91 to guard against exactly the kind of interruption that happened here. But since I have completed the resize process, mdadm has indeed changed the version back to 0.90 on all disks.
I have also tried to follow a howto on a similar problem, with a patch, at [URL], but I can't compile it; I get various errors about dpkg. So my problem is that I can't get GRUB to work. It just gives me a blinking cursor and unsupported RAID version: 0.91.
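If the superblocks really are back at 0.90, then reinstalling GRUB from a chroot of the installed system usually stops the complaint. A rough outline from the rescue CD, with the root partition and member device names as placeholders:
Code:
# confirm the metadata version on every array member
mdadm --examine /dev/sd[b-k]1 | grep -i version
# reinstall GRUB from inside the installed system
mount /dev/sda1 /mnt
for d in /dev /proc /sys; do mount --bind $d /mnt$d; done
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub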
How long does a hardware RAID card (RAID 1, two drives) take to mirror a 1 TB drive (500 GB used)? Is there a general rule of thumb for this? 4 hours? 12 hours? 24 hours?
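As a rough back-of-the-envelope estimate, assuming the controller mirrors the whole disk block by block (so the 500 GB of used space doesn't matter) and the drives sustain somewhere around 100 MB/s:
Code:
1 TB ≈ 1,000,000 MB
1,000,000 MB / 100 MB/s = 10,000 s ≈ 2.8 hours
at a more conservative 50 MB/s: ≈ 5.5 hours
So a few hours is the usual ballpark, unless the controller throttles the rebuild to keep serving normal I/O.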
I had a RAID problem on an HP ProLiant server due to a failing disk. When I changed the disk, things got complicated and the RAID seemed to be broken. I put the old disk back, repaired the RAID, then put the new disk in again, and everything returned to normal except that the system doesn't boot. I am stuck at the GRUB stage (the grub rescue prompt). I grabbed a netinst CD and tried the rescue mode; at some point the wizard correctly sees my two partitions, sda1 and sda2, and asks whether I want to chroot to sda1 or sda2. I got red-screen errors on both.
The error message said to check syslog, and syslog says it can't mount the ext4 filesystem because of a bad superblock. I switched to TTY2 (Alt-F2) and tried fsck.ext4 on sda1 (I think sda2 is the swap, because when I ran fsck on it, it said something like "this partition is too small" and suggested that it could be swap); it reports a bad superblock and a bad magic number. I tried e2fsck -b 8193 as suggested by the error message, but that didn't work either (I think -b 8193 is for trying a backup superblock).
The RAID is as follows: one RAID array of 4 physical disks, grouped into one logical volume, /dev/sda, so the operating system only sees one device instead of four disks.
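One note on the -b option: 8193 is the backup superblock location only for filesystems with 1 KB blocks; for the common 4 KB block size the backups live elsewhere. Their locations can be listed without touching the disk and then tried one by one (sda1 as in the post):
Code:
# print the layout mke2fs *would* create, without writing anything;
# this lists where the backup superblocks should be
mke2fs -n /dev/sda1
# then try one of the listed backups, e.g. the usual 4 KB block-size location
e2fsck -b 32768 /dev/sda1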
After months of using Lenny & Lucid Lynx without issues, I come back to the good existential questions.
I'd like a completely encrypted disk (/ and swap) in addition to the XP partitions (not that safe, but I'll switch completely to Linux once I have everything solved).
1. I create an ext4 partition for /boot.
2. Another partition (/dev/sda7) that I set up for encryption.
3. On top of that, I create a PV for LVM2.
4. I add it to a VG.
5. I create / and swap in the VG.
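In command form that layout is roughly the following; the VG/LV names and sizes are just placeholders:
Code:
# encrypt the big partition and open it
cryptsetup luksFormat /dev/sda7
cryptsetup luksOpen /dev/sda7 crypt_root
# LVM on top of the single encrypted device
pvcreate /dev/mapper/crypt_root
vgcreate vg0 /dev/mapper/crypt_root
lvcreate -L 20G -n root vg0
lvcreate -L 2G -n swap vg0
mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap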
However, if I add a hard drive, I will have to encrypt its main partition, add it to the VG and then expand /. So I'll need two passwords at boot time to decrypt.
So I'd like to:
- Encrypt the VG directly; that would solve everything, but no device file appears for the VG, only for the PV and the LVs.
- After hours of searching, I couldn't find a solution that needs only a single password...
Maybe there is some hope in a filesystem like btrfs providing encryption in the future, but I'll still have to create a swap partition outside it (or use a swap file, but then no hibernation is possible).
My system includes two 120 GB disks in a fakeraid RAID 0 setup, with Windows Vista installed on them. For Debian I bought a new 1 TB disk. My mission was to test Debian, so I installed it on the new disk; the idea was to remove that disk afterwards and use Windows as it was before. Everything went fine and Debian worked perfectly, but when I removed the 1 TB disk from the system, GRUB shows up at boot in its rescue mode.
Is my RAID setup now corrupted? GRUB seems to be installed on one of the RAID disks. Did GRUB overwrite some RAID metadata? Is there any way to recover the RAID setup?
dmraid -ay:
/dev/sdc: "pdc" and "nvidia" formats discovered (using nvidia)!
ERROR: nvidia: wrong # of devices in RAID set "nvidia_ccbdchaf" [1/2] on /dev/sdc
ERROR: pdc: wrong # of devices in RAID set "pdc_caahedefdd" [1/2] on /dev/sda
ERROR: removing inconsistent RAID set "pdc_caahedefdd"
RAID set "nvidia_ccbdchaf" already active
ERROR: adding /dev/mapper/nvidia_ccbdchaf to RAID set
RAID set "nvidia_ccbdchaf1" already active
I have created a system using four 2 TB HDDs. Three are members of a software RAID mirror (RAID 1) with a hot spare, and the fourth HDD is a separate LVM drive outside the RAID setup. All HDDs are GPT-partitioned.
The RAID is set up with /dev/md0 holding the mirrored /boot partitions (non-LVM), while /dev/md1 is LVM with various logical volumes inside for swap, root, home, etc.
When GRUB installs, it says it installed to /dev/sda, but the machine will not reboot and complains "No boot loader . . ."
I have used the Super Grub Disk image to get the machine started, and it finds the kernel; "grub-install /dev/sda" reports success, and yet the computer will not start, stopping at "No boot loader . . ." (Currently, because the machine is running, I cannot restart it to get the complete complaint, as md1 is syncing. I thought I'd let it finish the sync operation while I search for answers.)
I have installed and re-installed several times, trying various settings. My question has become: when setting up GPT and reserving the first gigabyte for GRUB, users cannot set the boot flag on that partition. I have tried GParted as well as the normal Debian partitioner, and both will NOT let you set the "boot flag" on that partition. So, as a novice (to Debian), I am assuming that the "boot flag" does not matter.
Other reading indicates that, yes, you do not need a "boot flag" partition; the "boot flag" is only for a Windows partition. This is a Debian-only server, no Windows OS.
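For BIOS booting from a GPT disk, what GRUB actually needs is not a boot flag but a small BIOS boot partition (the bios_grub flag in parted, type EF02 in gdisk) where it can embed its core image. Assuming the reserved space at the start of the disk is partition 1, marking it and reinstalling would look roughly like:
Code:
# mark the small reserved partition as a BIOS boot partition
parted /dev/sda set 1 bios_grub on
# then reinstall GRUB to the disk
grub-install /dev/sda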
I installed Debian 5.0.3 (a backport with the .34 kernel), because my server hardware (Dell PowerEdge R210) needs special firmware and drivers. The installation itself went quite smoothly; I put the system on a RAID 1 array with about 500 GB of space. As I said, the installation went well, but the system doesn't boot! No GRUB, nothing.
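If this is an md software RAID 1 (rather than the controller's hardware RAID), a common gotcha is that GRUB's boot code only ends up in the MBR of one member, or in neither; installing it on both disks from the finished system (or from a rescue chroot) is the usual fix. Device names here are placeholders for the two array members:
Code:
# put GRUB's boot code into the MBR of both mirror members
grub-install /dev/sda
grub-install /dev/sdb
# regenerate the boot configuration
update-grub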