I have a six-disk mdadm RAID 10 array of Western Digital RE3 drives. IOzone and all the other tools I have tried show pretty lackluster reads for a six-drive RAID 10 array, in the area of 200 MB/s. Writes are closer to what I'd expect, around 340 MB/s. These drives fly with an Adaptec RAID controller, but as soon as I put them in an mdadm array, the reads just aren't what I'd expect.
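For anyone hitting the same wall: mdadm RAID 10 read throughput is often limited by the default read-ahead rather than by the disks themselves. A minimal sketch, assuming the array is /dev/md0 (the read-ahead value is a starting point to experiment with, not a tuned figure):

Code:
blockdev --getra /dev/md0          # current read-ahead, in 512-byte sectors
blockdev --setra 65536 /dev/md0    # raise it and re-test
hdparm -t /dev/md0                 # buffered read benchmark on the array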
I have a file server running 10.04 Server 64-bit and Samba. I connect to it from my desktop, which is 10.04 Desktop 64-bit. I have the share mounted on my desktop via fstab as: //10.0.0.2/share /media/share cifs guest,uid=1000. Up until 30 June 2010 it was all fine. Now when I write to the server it is very slow, e.g. 2 Mbps, though when I read I get >100 Mbps, so I think my network is still OK. If I use Nautilus with smb://10.0.0.2/share I can write at >100 Mbps and also read at >100 Mbps. So any ideas why the write speed via the fstab Samba mount has started to go really slowly in the last couple of days?
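For comparison, a complete fstab line with an explicit CIFS write size; the wsize value here is an assumption to experiment with, not a known fix:

Code:
//10.0.0.2/share /media/share cifs guest,uid=1000,wsize=65536 0 0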
Is there a way to determine the I/O size being used for reads and writes to an attached storage device? I am trying to characterize the I/O sequences going to storage. I have seen mentions of max_sectors_kb, but the notes indicated that changing this value did not change the I/O size sent to the storage.
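Two ways to observe the request sizes actually hitting the device, assuming it is /dev/sda:

Code:
iostat -x 1 /dev/sda                          # avgrq-sz column: average request size in sectors
blktrace -d /dev/sda -o - | blkparse -i -     # per-request trace, including individual I/O sizes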
We are graphing various system parameters using Cacti. One of our graphs shows hard drive reads and writes. A question came up: why do we need this graph?
Is there already a program that reads multiple pipes or file descriptors and writes to standard output without splitting lines? Like cat, but reading all files simultaneously and preserving whole lines. It is needed to avoid coding select/epoll loops or using multithreading in simple programs. Like a "select loop for bash".
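Absent a ready-made tool, a shell sketch that merges named pipes line by line (the pipe names are placeholders); a single printf of a line shorter than PIPE_BUF is typically one atomic write, so lines from different sources should not interleave mid-line:

Code:
merge() { while IFS= read -r line; do printf '%s\n' "$line"; done < "$1"; }
mkfifo p1 p2      # placeholder pipe names
merge p1 &
merge p2 &
wait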
I have two NTFS partitions I use to store music and data. I've been using them on all my Linux boxes without any problems. Simply use ntfs-3g with noatime and everything works great.

However, since the update to openSUSE 11.3, writing to my NTFS partitions takes FOREVER. I've specified noatime, relatime and norelatime successively, without success. The partitions have plenty of space and are defragmented.

When copying large files, it starts fast at first, but in the last hundred MB it slows down to about 1.5 MB/s. Even after the transfer is supposedly done, the HD LED remains on and all other read/write activity involving the partition is completely halted. This can take between 5 and 10 minutes or more, depending on the size of the file. When copying several small files (100 MB or less), it starts at about 1.5 MB/s from the beginning.

I have the latest versions of FUSE and ntfs-3g installed.
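One thing worth ruling out: ntfs-3g buffers writes very differently depending on mount options, and the big_writes option has historically made a large difference to write throughput. A sketch, with the device and mount point assumed:

Code:
mount -t ntfs-3g -o noatime,big_writes /dev/sdb1 /mnt/music   # names assumed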
I'm working on a project using CentOS 5.3 that uses a solid-state drive (SSD) as the boot device. We want to configure it so that writes do not typically occur at run time, but configuration files can still be saved. There are two reasons we do not want writes to occur at run time: 1) writes will wear out an SSD over time, and 2) a system disruption during a write can cause a file system error.
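A common pattern for this is to keep the root filesystem read-only, put volatile paths on tmpfs, and remount read-write only for the moment a configuration file is saved. A sketch (the paths and file names are illustrative):

Code:
# /etc/fstab additions: keep volatile data in RAM
tmpfs /tmp     tmpfs defaults,noatime 0 0
tmpfs /var/log tmpfs defaults,noatime 0 0

# to save a config change, briefly allow writes
mount -o remount,rw /
cp /tmp/new.conf /etc/myapp.conf    # hypothetical file
mount -o remount,ro /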
I'm looking to stock my SuperMicro P8SCi with two 1-2 TB SATA hard disks for running backups and web hosting. There are reviews of certain disks stating that the low-power disks will get kicked out of the RAID due to their slow response time, and it also appears that there have been quality problems with these newer disks, as if the race to size has lowered their reliability.
Can someone recommend a good brand and specific disks that you've had experience with? I'd rather not need to replace these after putting them in, but I also don't want to pay significantly more for an illusion of quality.
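Whatever the brand, the "kicked out of the RAID" behaviour usually comes down to error-recovery timeouts (TLER/ERC): desktop disks can spend far longer retrying a bad sector than a RAID controller will wait. You can check whether a candidate disk supports a configurable limit (device name assumed):

Code:
smartctl -l scterc /dev/sda          # query error recovery control support
smartctl -l scterc,70,70 /dev/sda    # cap read/write recovery at 7 s, if supported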
I have a connection and it's slow in both Firefox and Chrome. However, I can ping sites and get the same response time as my XP machine, which runs fast, usually around the 40 ms mark with no lost packets.

Also, when I do installs, e.g. of Chrome etc., I was getting up to 500 kB/s download. I just can't seem to replicate this when browsing; pages load at a dial-up pace. The last time I had an issue like this was setting up my mum's work laptop on our home network, where work had specific proxy settings, so we turned them off and it worked fine.

I do not know how to tinker with this.

Info: Broadcom 4306 chip; I have tried ndiswrapper and bcm43xx-fwcutter.
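Fast pings combined with slow page loads often points at DNS rather than bandwidth, since each page triggers many lookups. A quick check (the domain is just an example):

Code:
dig example.com | grep 'Query time'           # repeat a few times; hundreds of ms suggests DNS trouble
time wget -O /dev/null http://example.com/    # total fetch time for comparison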
I am trying to connect a RAID box to the server via an LSI 8880EM2 RAID controller. The RAID box is a Fujitsu ETERNUS DX60 with RAID 1 configured. The server is a Fujitsu Primergy SX300 S5 with the LSI 8880EM2 RAID card. The external RAID box is recognised by the RAID controller's BIOS.

The server runs CentOS 5.5 64-bit. I have installed the megaraid_sas driver from the LSI website and the MegaCLI utility, but CentOS still fails to see the box. The MegaCLI utility, when launched from CentOS, recognises the RAID box; CentOS does not (no mapping is created in /dev).

I have also tried to create a logical RAID 0 HDD on one physical HDD (as seen by MegaCLI) with the MegaCLI utility in CentOS. That succeeded and the new logical drive could be used, but when restarting the server, the controller's BIOS fails with an error (not surprising, given one logical RAID 0 on one physical HDD) and the configuration is erased.

Has anyone tried connecting an 8880EM2 controller to a RAID box with RAID configured on the box, running it all under CentOS, and what were the results?
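If MegaCLI sees the logical drive but nothing appears in /dev, it may simply be that the kernel has not rescanned the SCSI bus since the controller brought the volume up. A sketch (the host number is assumed; identify it from the first command):

Code:
ls /sys/class/scsi_host/                          # find the megaraid_sas host, e.g. host0
echo '- - -' > /sys/class/scsi_host/host0/scan    # rescan all channels/targets/LUNs
dmesg | tail                                      # look for a newly attached sd device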
I am going to be using CentOS 5.4 for a home storage server. It will be RAID 6 on 6 x 1 TB drives. I plan on using an external enclosure connected via two SFF-8088 cables (four drives apiece). I am trying to find a non-RAID HBA which would support this external enclosure and allow me to use standard Linux software RAID.

If this is not an option, I'd consider using a hardware RAID card, but they are very expensive. The Adaptec 5085 is one option but is almost $800. If that is what I need for this thing to be solid, then that is fine; I will spend the money. But I am thinking that software RAID may be the way to go.
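With a plain HBA, the software-RAID side is straightforward. A sketch, assuming the six enclosure disks appear as /dev/sdb through /dev/sdg:

Code:
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
cat /proc/mdstat     # watch the initial sync
mkfs.ext3 /dev/md0   # or the filesystem of your choice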
I have a low-power machine I use as an SFTP server. It currently contains two RAID 1 arrays, and I am working on adding a third. However, I'm having a bit more trouble with this array than I did with the prior arrays. My suspicion is that I have a bad drive; I am just not sure how to confirm it. I have successfully formatted both drives with ext3 and performed disk checks on both, which did not indicate a problem.

I can see it progressing in the block count, but it's incredibly slow. In the course of 5 minutes it progressed from 1024/1953511936 to 1088/1953511936. Checking top, not even 10% of my CPU is being used. Are there any other performance items I could check that could be affecting this?
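If a failing drive is the suspicion, SMART data is usually more conclusive than a format or fsck, which can both pass on a disk that is quietly retrying sectors. A sketch (device name assumed):

Code:
smartctl -H /dev/sdc       # overall health verdict
smartctl -A /dev/sdc | egrep -i 'reallocated|pending|crc'   # key attributes to inspect
smartctl -t long /dev/sdc  # start a long self-test; read results later with smartctl -l selftest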
I am currently running Debian Squeeze on a headless system mainly used for backup. It has a RAID 1 made up of two 1 TB SATA disks, which can transfer about 100 MB/s reading and writing. Yesterday I noticed that one of the disks was missing from the RAID configuration. After re-adding the drive and doing a rebuild (which ran at 80-100 MB/s)...
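For reference, the re-add and rebuild monitoring steps look like this (array and device names assumed); it is also worth asking SMART why the disk dropped out in the first place:

Code:
mdadm --manage /dev/md0 --re-add /dev/sdb1   # return the dropped disk to the mirror
watch cat /proc/mdstat                       # rebuild progress and speed
smartctl -A /dev/sdb                         # check for reallocated/pending sectors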
I have ClearOS (CentOS) installed, with 2 x 2 TB SATA HDDs (hda and hdc). At installation time, I configured RAID 1 (and LVM) between the two HDDs. After a power problem, the two HDDs were re-syncing, and I checked it using: watch cat /proc/mdstat. The speed didn't exceed 2100 KB/s. I tried the following with no change:
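The usual suspects for a capped resync are the kernel's RAID speed limits, which are safe to raise temporarily (the values below are illustrative):

Code:
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current limits, in KB/s
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000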
I have a Lian Li EX-503 external RAID system with 4 x 2 TB drives, using RAID 10 for good performance. [Just for those who are interested: http://www.lian-li.com/v2/en/product/pr ... ex=115&g=f]
[Code]...
But using eSATA my transfer rates are very low (from an internal drive to the external EX-503), around 60-70 MB/s. But hdparm tells me:
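Note that hdparm only measures raw sequential reads, so it says nothing about write throughput over eSATA. A direct dd write test isolates that (the target path is assumed):

Code:
dd if=/dev/zero of=/mnt/ex503/testfile bs=1M count=2048 oflag=direct
rm /mnt/ex503/testfile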
I have set up a software RAID with mdadm. It consists of four 2 TB HDDs (one Samsung HD203WI and three HD204UIs). When I do benchmarks with bonnie++ or hdparm, I get about 60 MB/s write speed and 70 MB/s read speed. Each single drive from the array has a read speed of >100 MB/s when tested with "hdparm -t". I'm using openSUSE 11.4 x64 with the latest patches from the update repositories, on the 2.6.37.6-0.7-desktop kernel, with 4 GB of RAM and an Atom D525.
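On a small Atom board it is worth checking whether all four disks share a bandwidth-starved controller: if per-disk speed collapses when all drives are read at once, the bottleneck is the bus, not mdadm. A quick sketch (device names assumed):

Code:
for d in sda sdb sdc sdd; do hdparm -t /dev/$d & done; wait
# ~100 MB/s each in parallel: controller is fine; much less: shared-bus bottleneck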
As the title says, my RAID system is very, very slow (this is for RAID 5, 6 x 1 TB Samsung HD103UJ):

Code:
leo@server:~$ sudo hdparm -tT /dev/sdb
/dev/sdb:
 Timing cached reads:   1094 MB in  2.00 seconds = 546.79 MB/sec
 Timing buffered disk reads:    8 MB in  3.16 seconds =   2.53 MB/sec

It's impossible, and I've tried every configuration (RAID 5, RAID 0, pass-through) and I get almost exactly the same speed (with restarts, so I'm sure I'm really talking to the volumes I've defined).
One can compare it with the system drive:

Code:
leo@server:~$ sudo hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   1756 MB in  2.00 seconds = 878.12 MB/sec
 Timing buffered disk reads:  226 MB in  3.01 seconds =  75.14 MB/sec
Some hardware/software info:

Code:
RAID card: Controller Name ARC-1230, Firmware Version V1.48 2009-12-31

Code:
Motherboard: Manufacturer ASUSTeK Computer INC., Product Name P5LD2-VM DH

Code:
leo@server:~$ uname -a
Linux server 2.6.31-20-server #58-Ubuntu SMP Fri Mar 12 05:40:05 UTC 2010 x86_64 GNU/Linux

Code:
IDE Channels .....

I'm a bit lost now. I could change the motherboard, or some BIOS settings.
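Before swapping hardware, one cheap thing to rule out is the ARC-1230 negotiating a degraded PCIe link, which can cripple throughput. A sketch (the bus address below is a placeholder; take the real one from the first command):

Code:
lspci | grep -i areca                      # find the card's bus address
lspci -vv -s 01:00.0 | grep -i lnksta      # 01:00.0 is a placeholder; check link speed/width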
I've just finished setting up a RAID 1 on my system. Everything seems to be okay, but I have a very slow boot time. It takes about three minutes between the time I select Ubuntu from GRUB and the time I get to the login screen.
I found this really neat program called bootchart which graphically displays your boot process.
This is my first boot (after installing bootchart). I'm not an expert at reading these, but it appears there are two things holding up the boot: cdrom_id and md_0_resync. I tried unplugging my CD drive's SATA cable, and this is the new boot image.
It's faster, but it still takes about a minute, which seems pretty slow on this system. The md0 RAID device is my main filesystem. Is it true that it needs to get resynced on each boot?
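It shouldn't need to: a resync at every boot normally indicates the array is not being shut down cleanly. Independently of fixing that, a write-intent bitmap makes any future resyncs near-instant, since only the dirty regions are rewritten:

Code:
mdadm --grow --bitmap=internal /dev/md0   # add a write-intent bitmap
cat /proc/mdstat                          # a 'bitmap:' line should now appear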
I'm not sure how to diagnose my CD drive issue. The model is an NEC ND-3550A DVD-RW drive. I should also note that there's a quick error message at startup about the CD-ROM. It's too quick for me to read; just one line on a black screen saying "error: cdrom something something".
I have recently migrated my file server over to an HP MicroServer. The server has two 1 TB disks in a software RAID 1 array, using mdadm. When I migrated, I simply moved the mirrored disks over from the old server (Ubuntu 9.10 Server) to the new one (10.04.1 Server). I have recently noticed that write speed to the RAID array is *VERY* slow, on the order of 1-2 MB/s (more info below). Now obviously this is not optimal performance, to say the least. I have checked a few things: CPU utilisation is not abnormal (<5%), nor is memory/swap. When I took a disk out and rebuilt the array with only one disk (tried both), performance was as expected (write speed >~70 MB/s). The read speed seems to be unaffected, however!
I'm tempted to think that there is something funny going on with the storage subsystem, as copying from the single disk to the array is slower than creating a file from /dev/zero to the array using dd. Either way, I can't try the array in another computer right now, so I thought I would ask whether people have seen anything like this! At the moment I'm not sure if it is something strange to do with having simply chucked the mirrored array into the new server, perhaps a different version of mdadm? I'm wondering if it's worth backing up and starting from scratch! Anyhow, this has really got me scratching my head, and it's a bit of a pain! Any help here would be awesome, e-cookies at the ready! Cheers.
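Since each disk writes fine on its own but the mirror crawls, two cheap checks are whether the array is quietly resyncing, and whether one member throws ATA errors only under mirrored load. A sketch (array and path names assumed):

Code:
cat /proc/mdstat                           # any resync/recover line?
mdadm --detail /dev/md0 | grep -i state
dd if=/dev/zero of=/mnt/raid/test bs=1M count=512 conv=fdatasync
dmesg | tail -30                           # ATA errors during the write?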
I have a four-drive RAID 5 array set up using mdadm. The system is stored on a separate physical disk outside the array. Reading from the array is fast, but writing to it is extremely slow: down to 20 MB/s, compared to 125 MB/s reading. It writes a bit, then pauses, then writes a bit more and pauses again, and so on. The test I did was to copy a 5 GB file from the RAID to another spare non-RAID disk on the system: average speed 126 MB/s. Copying it back onto the RAID (into another folder), the speed was 20 MB/s. The other thing is a very slow write speed of several KB/s when copying from an eSATA drive to the RAID.
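For mdadm RAID 5 specifically, bursty slow writes often improve considerably with a larger stripe cache; the value below is a common starting point, not a tuned figure (array name assumed):

Code:
cat /sys/block/md0/md/stripe_cache_size           # default is 256 pages per device
echo 8192 > /sys/block/md0/md/stripe_cache_size   # then re-run the write test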
I set up software RAID 5 on three 7200 rpm Seagate Barracudas (2 x 80 GB, 1 x 40 GB), with 40 GB partitions at the beginning of all three disks. Now, when I do benchmarks using Disk Utility I get about 52 MB/s read speed on the array. This seems kind of slow considering the average read speed of each individual drive is about 48 MB/s. Also, boot-up takes about 30 seconds, which is the same as when I had Ubuntu installed on just one of the 'cudas, without RAID. Am I missing something? Shouldn't the read speed be more than twice as fast, since it can read from all three disks?
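In principle yes, roughly double for three-disk RAID 5 reads, but only if requests are large enough to span multiple stripes. Checking the chunk size and the array's read-ahead is a reasonable first step (array name assumed, read-ahead value illustrative):

Code:
mdadm --detail /dev/md0 | grep -i chunk
blockdev --getra /dev/md0
blockdev --setra 16384 /dev/md0   # then re-run the benchmark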
I recently did an apt update and upgrade on my CLI-only Lenny server. Upon reboot I get an "ATA softreset failed (device not ready)" error for all of my SATA drives. I noticed the upgrade changed the kernel to "Linux debian 2.6.26-2-amd64" (I do have a 64-bit CPU). Once loaded to a command prompt I can assemble my RAID 6 array with mdadm (using /dev/sda through sdd), then mount it with mount -a. But transfers to the array are horribly slow, ~1 MB/s. Upon reboot I get the same errors and have to assemble my array every time.
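The manual assembly, at least, can be made persistent so the array comes up by itself; these are the standard Debian steps (config path is the Debian default):

Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array definition
update-initramfs -u                              # rebuild the initramfs so it assembles at boot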
I've been, for some years, a happy user of Highpoint HPT374-based RAID cards, using RAID 5 with decent performance (constantly around 90 MB/s read and 60 MB/s write) on an old Athlon mobo with a 2.6.8 kernel. Now the mobo is dead, so I've got an ASRock A330GC (dual-core, with 4 GB RAM), installed Debian with kernel 2.6.26, and moved the controller and disks over. Performance has dropped to a painful 9 MB/s write, measured with dd of a huge file (interrupted with kill -USR1 $(pidof dd)).

Read performance is still around 90 MB/s: using hdparm -t repeatedly, the figures are constantly around 90 MB/s. I suspect some libata issue; with the old kernel the RAID was seen as hdb, now it's sdb, so some driver for the PATA disks may be responsible. I've used Highpoint's v2.19 driver; the latest driver is broken (it causes a kernel oops during a format of the RAID), and I've informed Highpoint.

Here is some info: hdparm -i /dev/sdb: HDIO_GET_IDENTITY failed: Invalid argument.
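A read-fine/write-terrible split like this is often a disabled drive write cache after the hd-to-sd driver switch. hdparm can query and set it, though whether the Highpoint driver passes the command through is another matter (device name as above):

Code:
hdparm -W /dev/sdb     # query write-cache state
hdparm -W1 /dev/sdb    # enable write cache, if the driver supports it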
I'm working on a server and noticed that the RAID 5 setup is showing 4 raid devices but only 3 total devices. It's on a fully updated CentOS 5 system that only has three SATA drives, as it cannot hold any more. I've done some research but am unable to remove the fourth device, which is listed as removed. The full output of `mdadm -D /dev/md2` can be seen below. I've never run into this situation before. Does anyone have any pointers on how I can reduce the raid devices from 4 to 3? I have tried
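For the record, reducing the device count of a RAID 5 is a --grow reshape, and mdadm requires the array size to be reduced first (which in turn means shrinking the filesystem first). This reshape also needs a reasonably recent mdadm/kernel and may not be available on stock CentOS 5. A sketch only, with placeholder sizes, and a full backup beforehand:

Code:
mdadm --grow /dev/md2 --array-size=<new-size>    # placeholder; shrink the filesystem first!
mdadm --grow /dev/md2 --raid-devices=3 --backup-file=/root/md2-reshape.bak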
The installer can't see my RAID controller (I assume), as I'm getting the following error: "Error opening /dev/mapper/isw_jbhgjgjj_Vol0: No such device or address". It just sees them as four individual drives: sda, sdb, sdc and sdd. Please note that I have set up the RAID 5 in the controller's BIOS interface, and the volume name is Vol0, which it seems it tries to load, but for some reason it can't. I have also tried different BIOS settings and nothing worked.
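The isw_ prefix belongs to Intel Matrix Storage fakeraid, which Linux handles via dmraid rather than a dedicated RAID driver. From a rescue shell you can check whether dmraid can activate the set; its error message is usually more informative than the installer's:

Code:
dmraid -s    # list discovered RAID sets and their status
dmraid -ay   # activate; should create /dev/mapper/isw_..._Vol0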
My problem is that I'm trying to install CentOS 5.4 x86_64 (DVD ISO) on a Supermicro X7SBI server with an Adaptec RAID 3405 controller installed.

I created a RAID 5 array and it is working fine (the Adaptec status says Optimal), but I can't install CentOS to that array (1.5 TB in size).

Whenever I try to install with: linux dd

I'm asked for a driver, which I downloaded from the Adaptec site and extracted to a USB drive (found during installation as sda1), which now has a lot of IMG and some ISO files on it.

I try to load (names simplified) RHEL5.img, CENTOS.img, etc., with x86_64 names (one exact name: aacraid-driverdisk-CentOS-x86_64.img), and I always get the error message: "No devices of the appropriate type were found on this driver disk".

This has been going on for a week now, and I can't find the right driver or figure out what I'm doing wrong to get the install done.
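That error usually means the installer doesn't consider the image a driver disk for this exact release and architecture. Loop-mounting the image lets you inspect its metadata before fighting the installer again (filename as above):

Code:
mkdir -p /mnt/dd
mount -o loop aacraid-driverdisk-CentOS-x86_64.img /mnt/dd
cat /mnt/dd/rhdd   # driver-disk description; should mention the right release
ls /mnt/dd         # expect files like modinfo and modules.cgz on an EL5 driver disk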
I'm giving CentOS a new look as a desktop. It's been a few years since I last installed it on anything but a server. While the default repositories have improved greatly, I am still bereft of so much of my cherished and necessary software: no Gambas, Wesnoth (shows up but the version is two years out of date), Chrome, Audacious plugins, XMMS support for FLAC (shows up but errors out if I try to install it), Cinelerra, Audacity, JACK audio, Ardour, Xine, Amarok, Blender, gThumb, AbiWord, Gramps, Disk Utility, many KDE apps (it's almost as if KDE is a forgotten desktop environment in the CentOS repositories), and pages more of software I'm too lazy to list. All of this is software that shows up in Debian-based distros, and most of it in the Fedora/SUSE repositories.
What are some good repositories for CentOS? I clearly have the wrong ones enabled, or CentOS is a very crippled desktop. Here are the repositories I've added/have: updates, extras, addons, adobe, base, c5-media (errors out, so I have it disabled), centosplus, contrib, kbs-CentOS-extras & misc, RPMforge, and RPMforge extras (errors out, so I have it disabled).
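For what it's worth, EPEL is the usual first addition alongside RPMforge on EL5, and mixing the two is easier with the priorities plugin. A sketch; the release RPM version below was current for EL5 at one point, so check the EPEL page for the current one:

Code:
rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
yum install yum-priorities            # then set priority= in each .repo file
yum --enablerepo=epel list audacity   # example query against the new repo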