I installed a RAID 6 array of 12x 2TB drives (two ext4 partitions: 16TB and 2TB) on an Intel SR2612UR server with an integrated LSI expander and an Adaptec 5805Z controller, but I get very poor read speed. Depending on the block size I set, it varies between 130 MB/s and at most 170 MB/s, which is really poor for a 12-drive RAID 6 array; on a similar Dell server I get above 650 MB/s. The Adaptec controller has the latest firmware installed.
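For comparing the two servers like-for-like, a quick sequential-throughput check with dd keeps caching out of the picture; the mount point and sizes below are arbitrary choices, not taken from the post.

```shell
# Rough sequential benchmark; point TESTDIR at the array's mount point.
TESTDIR=${TESTDIR:-/mnt/array}
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=512 conv=fdatasync 2>&1 | tail -n1
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true   # needs root, else the read below may hit RAM
dd if="$TESTDIR/ddtest" of=/dev/null bs=1M 2>&1 | tail -n1
rm -f "$TESTDIR/ddtest"
```

If dd on the raw array device shows the same 130-170 MB/s ceiling, the filesystem layout is off the hook and the controller/expander path is the place to look.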
My problem is that I'm trying to install the CentOS 5.4 x86_64 DVD ISO on a Supermicro X7SBI server with an Adaptec RAID 3405 controller installed.
I created a RAID 5 array and it is working fine (the Adaptec status says Optimal), but I can't install CentOS to that array (1.5TB in size).
Whenever I try to install with: linux dd
I'm asked for a driver, which I downloaded from the Adaptec site and extracted to a USB drive (detected during installation as sda1), which now has a lot of IMG and some ISO files on it.
I tried loading (names simplified) RHEL5.img, CENTOS.img, and so on, with the x86_64 names (one exact name: aacraid driverdisk-CentOS-x86_64.img), and I always get the error message: "No devices of the appropriate type were found on this driver disk"
This has been going on for a week now and I can't find the right driver, or figure out what I'm doing wrong, to get the install done.
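One thing worth checking before blaming the driver itself: anaconda expects the .img to contain a driver-disk filesystem (with files like rhdd and modules.cgz), not a raw ISO. Loop-mounting the image on another Linux box shows what is actually inside; the image name below is the one from the post, and the mount point is an arbitrary choice.

```shell
# Inspect the driver-disk image (the mount needs root).
IMG="aacraid driverdisk-CentOS-x86_64.img"   # exact name from the post
file "$IMG"                                  # should report a floppy/ext2 filesystem image
mkdir -p /mnt/drvdisk
mount -o loop "$IMG" /mnt/drvdisk
ls /mnt/drvdisk                              # a valid disk holds rhdd, modinfo, modules.cgz, ...
umount /mnt/drvdisk
```

If the contents look right but the installer still rejects it, the disk may target a different kernel version than the 5.4 install media.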
I just installed CentOS 5.4 on a brand-new DL160 G6. To get the data RAIDed I decided to go with mdraid, but I'm seeing quite harsh performance hits. I have no GUI or anything, just the basic server installation, and even the console feels sluggish.
The output from getinfo.sh disk:
==================== BEGIN uname -rmi ====================
2.6.18-164.15.1.el5 x86_64 x86_64
==================== END uname -rmi ====================
OK, we had to move one of our databases due to failing hardware. The new box is newer than the old one, but it's just dog slow. At first I thought it was MySQL, but now I'm realizing it's the network. As a test, I tried copying a remote file to the new box and to the old server (located at the same co-lo, same provider, same switch, etc.). Here are the results:
Old server: access_log.1 100% 4192KB 1.4MB/s 00:03
New Server: access_log.1 100% 4192KB 72.3KB/s 00:58
So now I see why people were complaining about web page load times, etc. I can't figure out the cause of the network latency. I looked around a bit (see below for some things); I suspected MTU, NIC speed, etc.
mii-tool shows:

eth0: 100 Mbit, full duplex, link ok
eth1: 100 Mbit, full duplex, link ok
Are there other things I can run to test, or other output I can provide, to get more information?
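Beyond mii-tool, packet-loss and latency numbers from each box narrow things down quickly; the awk one-liner below just pulls them out of ping's summary (sample text stands in here for a live `ping -c 100 <peer>` run on each server).

```shell
# Extract loss and average RTT from ping's summary line; replace the sample
# text with real output from `ping -c 100 <peer>` on the old and new boxes.
ping_out='100 packets transmitted, 97 received, 3% packet loss, time 99123ms
rtt min/avg/max/mdev = 0.212/4.318/80.104/9.512 ms'
echo "$ping_out" | awk -F'[ /]' '/packet loss/ {print "loss:", $6} /^rtt/ {print "avg_ms:", $8}'
```

Checking `ethtool eth0` for errors and for a speed/duplex mismatch against the switch is also worth doing, since both NICs negotiated only 100 Mbit.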
I've just removed Fedora 10 and installed CentOS 5.4, both x86_64, on my Lenovo ThinkCentre 9091-CTO workstation, and I'm seeing serious performance degradation during intensive I/O operations: huge load without significant CPU consumption in user mode, with about 50% in I/O wait. For example, if I run "tar xjvf some_large_tar.bz2" and run vmstat, I get the output below. This is the only CPU- and I/O-intensive process running. The load gets up to 6 on a C2D CPU and the machine has unacceptable responsiveness.
[root@f00 ~]# vmstat 5 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
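A quick way to summarize such a run is to average the wa column; the sample rows below stand in for real `vmstat 5 10` data lines. A wa pinned near 50 while us/sy stay low points at the disk path (driver, scheduler, write cache), not the CPU.

```shell
# Average the iowait (wa, second-to-last) column over vmstat data rows;
# the sample text stands in for output of: vmstat 5 10 | tail -n +3
vmstat_out=' 1  1      0  51234  8765 432100    0    0  1200  3400  900 1500  5  3 42 50  0
 0  2      0  50000  8800 433000    0    0  1100  3600  880 1400  4  2 44 50  0'
echo "$vmstat_out" | awk '{sum += $(NF-1); n++} END {printf "avg wa: %.0f%%\n", sum/n}'
```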
I'm trying to run Ubuntu 8.10 Intrepid Ibex on an old hand-me-down laptop that I got from a family member, and I've noticed that the performance is actually much worse than it was on Windows XP Professional (32-bit).
First of all, let me give you my specs: IBM ThinkPad R32, Intel Mobile Pentium 4-M 2.0 GHz, ATI Radeon Mobility M6 (I think this is actually the Mobility Radeon 7000 IGP) with 16MB RAM, 256 MB system RAM (PC-2100).
I understand that these are very low-performance specs, but it feels like the computer is running sluggishly even for those specs. It ran just fine under Windows XP but is extremely sluggish in Ubuntu. It seems to have detected all the hardware fine, but I feel it may not be utilizing the CPU and/or video to the extent that it should. A couple of places where it really struggles are opening new programs, viewing Flash-heavy Web 2.0 pages (i.e. videos), and listening to music/podcasts (with Rhythmbox).
Rhythmbox seems to kill my performance more than anything else, but performance is poor all around compared to XP, and I feel that shouldn't be the case. Like I said, I understand the specs are poor, but it feels like it's not even living up to them. I know that 8.10 is old and outdated now, but it's what I had lying around, and I figured it might perform better on this old deprecated laptop than 9.10 would anyway. Am I wrong in that assumption? Would 9.10 perform better for me?
I am running Karmic and wireless performance is terrible on my Eee PC 1000HA. I see the link rate as low as 1 Mbit at times, at other times 24 Mbit, and briefly 48 and 54 Mbit. Running an Internet speed test over wireless I get pathetic speeds, as low as 15 Kbps, and it's 5 feet from the wireless router. Other wireless machines in the same area show 54 Mbit and perform great. Has anyone solved this problem with the Asus Eee PC 1000HA running Karmic?
I just installed Ubuntu 10.04 on a laptop that was given to me. It is having a lot of lag issues, especially when playing videos and visiting websites that use Flash. It's an Intel Pentium 4 2.4GHz machine. I think it should perform better, because I previously used an AMD Sempron 2.0GHz machine with Ubuntu and video playback was much better than this. I'm thinking my problem is the graphics card driver. Here is the card this machine has:
Code:
:~$ lspci | grep Radeon
01:05.0 VGA compatible controller: ATI Technologies Inc Radeon IGP 330M/340M/350M

The AMD machine was using a VIA graphics chipset.
I used the command "Xorg -configure" to create an xorg.conf file, hoping that might help, but it didn't. Here it is:
Is there any tweaking I can do in xorg.conf that might help? I don't know which of those options I should set. I also tried changing the driver to "ati", but that did nothing. I found some info on settings with "man radeon", and I'm going to try some of those out.
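For reference, a hedged sketch of a Device section to experiment with; these options come from "man radeon", and whether they actually help on an IGP 330M/340M/350M is an open question, not something established in the post:

```
Section "Device"
    Identifier  "Card0"
    Driver      "radeon"
    Option      "AccelMethod" "EXA"     # try "XAA" as well; one may be faster on this chip
    Option      "EnablePageFlip" "on"   # can help video playback
    Option      "AGPMode" "4"           # IGP parts are AGP-attached; try 1/2/4
EndSection
```

Change one option at a time and restart X between tries, so it's clear which one (if any) makes the difference.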
I have an ATI Radeon 7000 (taken from lspci) graphics card with 64 MB. My computer has a 700 MHz Celeron CPU and 650 MB RAM, running Ubuntu 10.04. Anything graphical is slow; I cannot even play basic games like SuperTux. I think it is a graphics driver problem. Will this card ever work well with Linux, and what do I need to do?
I'm installing Fedora 10 on an old but good server that used to be a Windows box. It has two SATA disks in a RAID level 1 on an Adaptec 2410SA card. The disks are clean, with no data on them; I even did a low-level format and recreated the RAID, twice. The DVD boots and installs the OS on the RAID array and reports success, and the new volume appears under the Computer icon, but it cannot be mounted. After a reboot attempt it reports: Reading physical volumes. This may take a while... (it doesn't)
Volume group "VolGroup00" not found
Unable to access resume device (/dev/VolGroup00/LogVol01)
mount: error mounting /dev/root on /sysroot as ext3: No such file or directory
input: PS/2 Generic Mouse as /devices/platform/i8042/serio1/input/input4
Then I get a blinking cursor that echoes keystrokes but does nothing else. Rebooting with the install DVD does not show the previous (unmountable) disk image, and repeating the above process only increases the operator's need for alcohol. I have set "disable write cache for drives", as I found that in a post, but only after the Fedora install. I don't know what GRUB is, for instance, but once Linux is booted (on other computers) I can muddle through OK in terminal mode.
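Since the failure is the LVM volume group not being found at boot, booting the Fedora DVD with `linux rescue` and probing for the volume group from the rescue shell would show whether the install actually landed on the array. A sketch, assuming the default VolGroup00 naming taken from the error messages:

```shell
# From the rescue shell after booting the DVD with: linux rescue
lvm pvscan                     # does any physical volume show up on the array at all?
lvm vgscan
lvm vgchange -ay VolGroup00    # activate the volume group named in the boot error
ls /dev/VolGroup00/            # LogVol00 / LogVol01 should appear if activation worked
```

If pvscan finds nothing, the installer wrote to somewhere the booted kernel can't see (e.g. only one leg of the controller's mirror), which would also explain the unmountable volume.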
I was trying to update the driver for my Adaptec RAID controller. Unfortunately, Adaptec only provides RPM packages, so I converted the package using alien. After installing with dpkg, I tried using dkms to build the module:
Code:
root@atulsatom# dkms add -m aacraid -v 184.108.40.206400

Adding the driver was successful, but I got some errors during the build:

Code:
root@atulsatom# dkms build -m aacraid -v 220.127.116.11400
Kernel preparation unnecessary for this kernel. Skipping.
I experience very poor desktop performance on my system. General clicking responsiveness is poor, taking a second or two to focus a new window, move windows around, etc. The Xorg process constantly uses over 60% of the CPU even when I'm doing nothing, and when I do click around it quickly goes up near 100% usage. Performance is initially OK when I first launch into my GNOME environment, but it deteriorates within 30 minutes or so. Eventually, after days of uptime, my desktop will crash. I am running Ubuntu 10.10 with the latest upgrades. I have four monitors attached to my "nVidia Corporation G98 [Quadro NVS 420]" graphics card. I am running the "NVIDIA accelerated graphics driver (version current)" recommended in the Ubuntu "Additional Drivers" app. I get a fair number (hundreds) of these in my /var/log/Xorg.0.log:
though they don't appear constantly while my desktop is running (I assume they come in batches?). I'm not sure if they're relevant. A separate, though possibly related, issue is that the SpeedCrunch app (a desktop calculator) is guaranteed to crash my desktop as soon as it gets focus. Here's the relevant output of "lspci -v" showing my graphics card:
03:00.0 PCI bridge: nVidia Corporation NF200 PCIe 2.0 switch for Quadro Plex S4 / Tesla S870 / Tesla S1070 / Tesla S2050 (rev a3) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=03, secondary=04, subordinate=06, sec-latency=0
So for the last, oh, three or four hours now, I've been trying to get videos to play properly in Ubuntu. I have the default player, I have VLC, and I (apparently) installed MPlayer, but it won't play anything at all.
I was having horrible screen tearing with standard-definition AVIs in VLC; I fixed that by setting vsync in Catalyst Control Center to "always on", or whatever it's called.
Now I have tried to play a 720p x264 video. It plays, but not well: audio is fine, but video is all jerky in VLC, and in Totem it's just a mess of screen tearing (although the video does not jerk in that player).
I have just spent HOURS trying to get VIDEOS TO PLAY. This makes no sense. Playing a video should be the simplest thing in the world for any OS to do, but with Ubuntu it is like pulling teeth. I have NEVER been so frustrated while using a computer. What is going on here?
BTW, I installed every single thing needed to use MPlayer, but it still does not work. The player comes up but will not play any type of file.
My system specs: 2.2GHz dual-core Intel CPU, 3GB RAM, 1GB Radeon HD 4650 video card, Asus IPIBL-LB motherboard.
I was really enjoying Ubuntu until I got around to trying to watch some videos. Now I am maybe minutes away from going back to Vista; at least there I can watch full 1080p videos with no problems.
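For the record, one common cause of half-working playback on Ubuntu of that era is simply missing restricted codecs; installing the stock codec pack is a reasonable first step, though whether it fixes the jerkiness on this particular Radeon setup is not guaranteed:

```shell
# Pull in the restricted codec pack (MP3, H.264, etc.) used by Totem/GStreamer.
sudo apt-get update
sudo apt-get install ubuntu-restricted-extras
```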
I am running Kubuntu 9.10 on a Dell Inspiron 14 with an Intel GM45 graphics card. When I run 3D games like Nexuiz, OpenArena and Tremulous I get really random performance: sometimes things are totally playable at somewhere between 50-80 fps, and other times I get maybe 4 fps and everything takes forever to load. These effects persist across logouts, X restarts and reboots! I get no errors when I run things from the console either. I never really play games much, but when something doesn't work...
iwconfig shows that the bit rate is 130Mb/s and link quality is 98/100. I'm using the Wicd network manager instead of the default GNOME one. I'm getting lots of packet loss and performance is very bad; the connection is practically unusable. I've tried installing the compat-wireless backport package, but that did not work at all.
I have an ATI Radeon X1300 graphics card on an Intel platform. Since the last reinstall, my Debian Squeeze system shows very poor performance in glxgears: only 80-90 FPS. Last time I had this problem it was the KMS setting in the kernel; I disabled modesetting in GRUB and glxgears then showed about 900-1300 FPS. Now the problem is back, even though I made the same setting in the boot loader.
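In case the setting got lost in the reinstall, this is how disabling radeon KMS usually looks on Squeeze's GRUB 2 (the file path is the stock one; `radeon.modeset=0` is the driver-specific form, `nomodeset` the global one):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.modeset=0"
```

Then run `update-grub`, reboot, and confirm the flag actually took effect with `cat /proc/cmdline`; if it isn't there, GRUB is reading a different config than the one edited.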
I recently loaded Squeeze and had an issue with 2D performance: when being prompted for the root password, the shading slowly draws down the screen several times before stopping at the final darker shade. I then added the firmware-linux-nonfree package and all worked extremely well, even Compiz (I don't really care about Compiz; I just loaded it to see how it performed). I then distro-hopped a bit, and when I came back to Squeeze the first thing I did was add the package, but the video is acting as if I haven't added it. I've searched for an answer, but have not found anything recent indicating whether Xorg was upgraded or what. Stable Lenny works fine, but I'd rather have the nice console fonts when not in X, and more up-to-date apps like GIMP and XSane in Squeeze.
This is a thread I've moved over from the install forum, and it is hopefully more focused; sorry if I have violated some protocol. The problem: I have a new machine build configured to dual-boot Windows 7 and openSUSE 11.4. Network performance in Windows is very good, but network performance in openSUSE is very poor.
I have been trying for a few days to install CentOS 5.4 on an IBM x306, and I cannot get it to properly handle the Adaptec Embedded SATA HostRAID controller. I have been working with Linux for a few years, but this is new territory for me. I typically use Debian-based distros, but I did some research on the IBM site and found that RHEL is a supported OS for this machine, so I decided to give CentOS a try. I have some experience with Fedora, so it's not totally foreign to me.
Anyway, I'm a bit confused. Using the IBM RAID utility, I set up a mirrored pair of 1GB SATA HDDs. When I run the CentOS installer, it sees the pair as a single array. I am able to partition the array and complete the install, but when I boot into the OS, it sees the drives as two separate devices, sda and sdb. I can pull either drive and boot from a single disk, but it doesn't seem to behave as a mirrored pair: if I make changes on sda, they are not replicated to sdb. Also, I can't use the CLI or GParted to format the existing space on the array; I get an error either way. I believe this is because CentOS doesn't have a driver for the RAID controller, but I don't see why it would work in the installer and not in the installed OS.
My next approach was to start over and run "linux dd" at the start of the installation. I tried to find the driver for the controller on the IBM site so I could load it when prompted, but couldn't find a newer version than RHEL 4 Update 3 (I'm assuming this corresponds to CentOS 4.3). I tried it anyway, but when I select the floppy during setup, it tells me it's not for this version of CentOS. I read several times that there are .img files that might help me in the 'images' directory of disc one, but I only see diskboot.img, minstg2.img, and stage2.img. I don't think any of these are what I'm looking for; I thought there was supposed to be a drvblock.img or driverdisk.img.
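Worth knowing here: HostRAID is firmware ("fake") RAID, so without the adapter driver the OS needs dmraid to assemble the mirror from the metadata on the disks, which would explain why the installer (which activates it) sees one array while the installed system sees sda and sdb separately. From the installed system, a quick check (device names are assumptions):

```shell
# Does the HostRAID metadata show up, and can device-mapper assemble it?
dmraid -r     # list RAID metadata found on sda/sdb
dmraid -ay    # activate the mirror as /dev/mapper/<set-name>
```

If `dmraid -r` lists both disks as members of one set, installing/enabling dmraid in the initrd (rather than hunting for a vendor driver disk) may be the cleaner route.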
My hardware: Tyan 1834 (VIA chipset) dual P-III with 512 MB RAM, an Adaptec AHA-2940AU SCSI card with 3 Seagate drives, and an ATI Rage 128 AGP video card.
I tried to install CentOS 5, but it did not locate any hard drives (no drives listed at the partitioning step).
I was able to install CentOS 4.4; it located the ATI card, installed successfully, and runs.
However, trying to upgrade to CentOS 5 also fails and does not find any hard drives (same error). It also says "no video found, assuming headless", but output is still shown on screen (I'm using the 'linux text' install).
http://www.adaptec.com/en-US/_common/linux/ lists Linux "supported distributions" including CentOS 4.0/4.2/4.3, but there is no mention of CentOS 5 (or could that page be too old?).
I want to use this as a test platform for custom software development, but it must match the existing production machine's CentOS 5 platform (CentOS 4 could introduce a significant difference).
I am busy doing some tests using the Adaptec 1430SA hardware RAID controller. I started the setup by creating the RAID 1 array, and it worked OK. I did a regular setup of CentOS 5.5 64-bit, and all worked OK, but the system does not boot: when I start the box it drops into a minimal GRUB screen. I first tried installing GRUB to the MBR, as suggested and as it needs to be done for a software RAID setup, and then I tried installing it to the first partition in the boot sequence, as offered by the second choice.
I built a RAID 5 storage array using mdadm on 3x WD Green 1TB hard drives. I used the Disk Utility GUI to create the array, and it took about 24 hours to build. When I started copying files to it, I noticed it performed at decent speeds for a while, then got really slow, then sped back up. Just for laughs I ran the Read-Only Benchmark function in Disk Utility and got a graph that would confuse even stockbrokers. Any thoughts on what the issue might be? I tried searching for an answer, but most people are only affected by write performance issues, not read.
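One likely culprit for speeds that come and go: a background resync or check still running on the array, which throttles itself against foreground I/O. `cat /proc/mdstat` shows it directly; the snippet below just pulls the progress figure out, with sample text standing in for the live file.

```shell
# Is a resync/recovery still running? (sample stands in for: cat /proc/mdstat)
mdstat='md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
      1953519872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [=>...................]  resync =  7.4% (72345678/976759936) finish=182.3min speed=82654K/sec'
echo "$mdstat" | grep -o 'resync = *[0-9.]*%'
```

WD Greens also park their heads aggressively, which can produce exactly this kind of sawtooth benchmark; that is a separate thing to rule out.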
I wasn't sure where to post this question, so administrators, feel free to move it. I have a media server running Ubuntu 10.04 Server, and I set up a software RAID 5 using five Western Digital Caviar Green 2TB 7200RPM 64MB drives. Individually they benchmark (using Ubuntu's Disk Utility GUI, palimpsest) at about 100-120 MB/s read/write. I set the RAID 5 up with a stripe size of 256KB, and then I waited the 20 hours it took to synchronize. My read speeds in RAID are up to 480 MB/s, but my write max is just under 60 MB/s. I knew my write performance would be quite a bit lower than my read, but I was also expecting at least single-drive performance. I have seen other people online with better results in software RAID, but have been unable to achieve the results they have gotten.
My bonnie++ results are more or less identical (I used mkfs.ext4 and set the stride and stripe-width). The PC has 2048MB of RAM and a 2.93GHz dual-core Pentium (Core 2 architecture), so I don't think that's the bottleneck. These drives are on the P55 (P45*) south bridge SATA controller.
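For reference, the stride/stripe-width numbers for this layout (256KB mdadm chunk, 4KB ext4 blocks, 5-drive RAID 5 = 4 data-bearing disks per stripe) work out as follows:

```shell
chunk_kb=256    # mdadm chunk ("stripe size") used at creation
block_kb=4      # ext4 block size
data_disks=4    # a 5-drive RAID 5 has 4 data-bearing disks per stripe
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0"
```

If the values actually passed to mkfs.ext4 differed from these, a misaligned filesystem would explain sub-single-disk writes, since unaligned stripe writes degrade into read-modify-write cycles for the parity.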
I'm currently experiencing some serious issues with write performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article, btw!). Measuring with dd, write performance is only 8.7 MB/s; read is great, though, at 74.5 MB/s. The tests were run straight after rebooting, and I have not (yet!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300GB.
As you can see from the bo column, something is definitely stalling. Per the top output, %wa (waiting for I/O) is always around 75%, yet, as above, writes are stalling. The CPU is basically idle all the time. The hard drives are quite new, and smartctl (smartmontools) does not detect any faults.
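With an idle CPU, a high %wa and stalling writes, the I/O elevator and the dirty-page writeback thresholds are two cheap things to inspect before any real tuning; a sketch, with the device name assumed to be sda:

```shell
# Which elevator is in use for the member disks? (default on this era is cfq)
cat /sys/block/sda/queue/scheduler        # e.g.: noop anticipatory deadline [cfq]
# How much dirty data is the kernel allowed to buffer before forcing writeback?
sysctl vm.dirty_ratio vm.dirty_background_ratio
```

A large dirty_ratio with a slow array produces exactly this burst-then-stall pattern in the bo column, since writeback happens in big synchronous gulps.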
I have recently migrated my file server over to an HP MicroServer. The server has two 1TB disks in a software RAID-1 array using mdadm. When I migrated, I simply moved the mirrored disks over from the old server (Ubuntu 9.10 Server) to the new one (10.04.1 Server). I have recently noticed that write speed to the RAID array is *very* slow, on the order of 1-2 MB/s (more info below). Now, obviously this is not optimal performance, to say the least. I have checked a few things: CPU utilisation is not abnormal (<5%), nor is memory/swap. When I took a disk out and rebuilt the array with only one disk (I tried both), performance was as expected (write speed >~70 MB/s). The read speed seems to be unaffected, however!
I'm tempted to think that there is something funny going on with the storage subsystem, as copying from the single disk to the array is slower than creating a file on the array from /dev/zero using dd. Either way, I can't try the array in another computer right now, so I thought I'd ask whether people have seen anything like this. At the moment I'm not sure if it is something strange to do with having simply chucked the mirrored array into the new server; perhaps a different version of mdadm? I'm wondering if it's worth backing up and starting from scratch! Anyhow, this has really got me scratching my head, and it's a bit of a pain. Any help here would be awesome, e-cookies at the ready! Cheers.
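One mdadm-version-related thing to rule out: a write-intent bitmap. A bitmap with a small chunk size is notorious for exactly this pattern of collapsed writes with unaffected reads, and the array may have gained one on the new system. A sketch, with the array name assumed to be /dev/md0:

```shell
mdadm --detail /dev/md0 | grep -i bitmap        # is an internal bitmap present?
mdadm --grow --bitmap=none /dev/md0             # drop it temporarily and re-test writes
mdadm --grow --bitmap=internal --bitmap-chunk=131072 /dev/md0   # re-add with a large chunk
```

If writes jump back to ~70 MB/s with the bitmap removed, keeping it with the large chunk restores most of the speed while preserving fast resyncs after an unclean shutdown.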
So I have been doing some RAID 5 performance testing and am getting bad write performance when configuring the RAID with an even number of drives. I'm running kernel 2.6.30 with software RAID 5. This seems rather odd and doesn't make much sense to me. For RAID 0 my performance consistently increases as I add more drives, but this is not the case for RAID 5. Does anyone know why I might be seeing lower performance when constructing my RAID 5 with 4 or 6 drives rather than 3 or 5?
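One hedged explanation: typical write request sizes are powers of two, and a full RAID 5 stripe is (n-1) x chunk, so with the default 64KB chunk only the odd drive counts give a power-of-two stripe that whole requests fill exactly; partially filled stripes force read-modify-write cycles to recompute parity. The arithmetic:

```shell
chunk_kb=64   # mdadm default chunk size
for n in 3 4 5 6; do
  echo "$n drives: full stripe = $(( (n - 1) * chunk_kb )) KB"
done
```

128 KB and 256 KB stripes (3 and 5 drives) divide evenly into power-of-two writes; 192 KB and 320 KB (4 and 6 drives) do not, which would show up as exactly this write penalty while RAID 0 (no parity) keeps scaling.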