CentOS 5 Server :: Improve Server Speed On Servers With Heavy Load?
May 4, 2009
A tool called ktune has appeared in the repository; its purpose is to adjust some sysctl.conf settings to improve server speed on servers under heavy load. What is this tool for if one can achieve the same thing with a configuration file added to system startup? Or is ktune just such a file?
I am new to CentOS and Linux in general. I have just got myself a Dell 1950 server with 2x 1TB SATA2 hard disks in it. The server comes with a PERC 5/i RAID card with 512MB. I put the disks in RAID 0 and the RAID card initialized them with a 128KB stripe, write-back and read-ahead enabled. When I loaded CentOS it did not recognize the Dell layout and wanted to initialize the disks again, so I let it. I then created an 80GB boot and O/S partition and a 100GB swap; the rest became LVM space to run SolusVM. But I found the RAID 0 getting extremely slow read and write speeds.
For example, with the same disks: desktop PC (Windows Vista 64) max 214MB/s; server with CentOS 104MB/s.
Now I am not sure, but I am told that I need to align the O/S with the RAID card settings, and I have no idea how to do this. I need plain, easy, step-by-step instructions: how to calculate the alignment, how to format the disk this way, and what files to edit where, if needed. I have spent hours trying to figure out why my RAID 0 is slower than a single disk.
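As a rough sketch of the calculation step only: assuming a 128 KB stripe-element size (the value mentioned above for the PERC; check your controller's actual setting) and 512-byte sectors, the first partition should start on a stripe boundary.

```shell
# Alignment arithmetic sketch; STRIPE_KB is an assumption, verify it
# against the controller's configuration before repartitioning.
STRIPE_KB=128
SECTOR_BYTES=512
ALIGN_SECTORS=$((STRIPE_KB * 1024 / SECTOR_BYTES))
echo "start the first partition at sector $ALIGN_SECTORS"
# With that number, a (destructive!) re-partition could look like
# this; the device name is an example:
#   parted /dev/sda mklabel msdos
#   parted /dev/sda mkpart primary ${ALIGN_SECTORS}s 100%
```

Every partition start (and any LVM physical extent boundary, via `pvcreate --dataalignment`) should fall on such a multiple.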
How can I improve rendering speed in Fedora? I notice it's very, very slow compared to Firefox on Windows and Safari on Mac. I know my hardware is quite old, but it should behave similarly to Windows...
Thinkpad T42, Pentium M 1.7GHz, 2GiB RAM, ATI Radeon Mobility 7500 32MiB, 320GiB HD @ 7200rpm
Can I run/compile programs on my remote server and control it over the LAN? How do I do it?
Long Version:
I have a spare computer here. It's rather powerful for my daily needs (Athlon X2 6000) and I want to set it up as my HTPC/server. Don't worry about the HTPC part: it has digital audio output and a Radeon HD3200 with HDMI out, and it outputs 1080p pretty well under Windows (I'll set it up with Ubuntu when I get everything straight). So I thought, "Well, it's going to be overkill to use such a machine as an HTPC only. I'll use it as a server too!" So far so good, but I don't know anything about servers. I would like it, as a server, to compile huge programs and handle big compression jobs (I'm currently on a Celeron ULV 1.2GHz netbook for such tasks, which takes ages). Also, it would be pretty nice if I could control the server with my netbook: start/stop music and movies, control Boxee/XBMC (media players) with the netbook, and, if possible, control its mouse.
By the way, will Ubuntu 10.10 support hardware acceleration for the ATI Radeon HD3200? Under Windows' DXVA it works OK.
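For the remote-compile and remote-control part of this question, a minimal SSH-based sketch; the hostname, user, and paths here are invented:

```shell
# Log in to the HTPC from the netbook:
#   ssh user@htpc
# Kick off a big build remotely and keep it running after logout:
#   ssh user@htpc 'nohup make -C ~/src/bigproject > build.log 2>&1 &'
# Ship a file over for the faster box to compress:
#   scp huge.tar user@htpc:/tmp/
```

For controlling Boxee/XBMC and the mouse from the netbook, one common option is a VNC server (e.g. x11vnc) on the HTPC plus a viewer on the netbook; XBMC also has its own web interface and remote-control clients.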
Is there a way I can improve my Internet speed? I have a 100Mbps connection but the download speed is only 100kbps. I know that my ISP has limited my connection speed, but I am curious how I can get the maximum speed.
I installed Ubuntu Server 10.04 LTS with 3 LAN cards. The 1st and 2nd connect to ISP1 and ISP2 with public static IPs, and the 3rd connects to the local switch. How do I configure this server for Internet load balancing, and how do I configure a proxy server with user authentication and per-user site blocking?
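For the load-balancing half, a minimal iproute2 sketch; the gateway addresses and interface names are assumptions, and a real multi-WAN setup also needs per-ISP routing tables and connection marking so reply traffic leaves by the interface it arrived on:

```shell
# Run as root. eth0/eth1 face ISP1/ISP2; gateways are example addresses.
# Equal weights split new outbound routes roughly 50/50.
ip route replace default scope global \
    nexthop via 192.0.2.1    dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1
```

For the proxy half, Squid on the LAN interface with `auth_param basic` (e.g. the NCSA helper against an htpasswd file) and per-user `acl ... dstdomain` rules is the usual route.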
I am running digiKam right now, rebuilding my thumbnails. It seems this task takes a lot of CPU power, and it's not digiKam itself but rather Xorg that takes the load. Is this right? I just wonder.
What should a bash script look like that copies a huge directory with multiple sub-folders to a new place while checking the load, and pauses for several seconds if the load reaches, let's say, 3 or 4? I only know the simple command cp -r /dir/allfiles /dir/newplace. However, I would like to copy over 30,000 files, which will cause high load.
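One way to sketch this: copy file by file and, before each copy, sleep while the 1-minute load average from /proc/loadavg is at or above a threshold. The threshold, sleep interval, and paths below are examples.

```shell
#!/bin/sh
# Throttled recursive copy: pause while the 1-minute load average is at
# or above MAX_LOAD (default 3).
MAX_LOAD=${MAX_LOAD:-3}

load_too_high() {
    # integer part of the 1-minute load average
    [ "$(awk '{print int($1)}' /proc/loadavg)" -ge "$MAX_LOAD" ]
}

copy_with_load_check() {
    src=$1 dst=$2
    find "$src" -type f | while read -r f; do
        while load_too_high; do
            sleep 5          # back off until the box calms down
        done
        # recreate the directory structure, then copy one file at a time
        rel=${f#$src/}
        mkdir -p "$dst/$(dirname "$rel")"
        cp -p "$f" "$dst/$rel"
    done
}

# example invocation (paths from the question):
#   copy_with_load_check /dir/allfiles /dir/newplace
```

Copying one file at a time is slower than a single `cp -r`, but that is exactly what gives the script a chance to check the load between files.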
This is my first time testing Piranha and I can't understand a couple of things:
1. How do I set up a public floating IP address between the active and backup Piranha nodes?
2. Do the Piranha nodes have to run on public IPs?
3. How do I connect the active Piranha node with the backup nodes?
4. Does Piranha only support HTTP and FTP?
5. Do I need to create a common login for the load monitor so that Piranha can log in to the real servers and check the load?
6. What are the hardware requirements for Piranha on a heavily loaded site?
I am using VMs all the time. I am using the web GUI interface with minor file editing, but I prefer doing it with the GUI. Currently running CentOS 5.6 x86_64.
I am completely stumped. I found my web server (lighttpd) unresponsive this morning and had to hard-cycle it. After some cleaning up, all was happy. However, after about an hour of handling a decent amount of web traffic, time freezes as far as the web server is concerned. I've got an hour of access.log data with the following date: [23/Aug/2010:20:42:58 -0500]
It never changes. The load average now reads 0.00 0.00 0.00 (which is totally inaccurate), and I am not able to log in remotely. I have taken the server down 3 or 4 times today, and after an hour or so of functioning normally, this is what happens. Additionally, the local time is now off by 30 minutes. Trying to force it with ntpdate does nothing (or, at the very least, sets it to the same incorrect time).
Does anybody have experience with Linux tuning? I'm really interested in sysctl.conf tuning settings for batch servers (3D rendering, physics simulations, etc.) - I mean memory and CPU settings for heavily loaded systems. What kind of settings do you have in your clusters? I'm working with Red Hat Enterprise 5 x86_64.
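As a hedged illustration of the kind of /etc/sysctl.conf fragment people start from on batch/compute nodes; every value here is only a starting point to benchmark against your own jobs, not a recommendation:

```
# Starting points only - benchmark each change on your own workload
vm.swappiness = 10              # prefer keeping the working set in RAM
vm.dirty_ratio = 40             # let large sequential writers buffer more
vm.dirty_background_ratio = 10  # start background writeback earlier
vm.overcommit_memory = 0        # default heuristic; 2 for strict accounting
```

Apply with `sysctl -p` and compare job wall-clock times before and after each change.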
I've got an amazon EC2 instance running Natty 11.04. I want to harden this server and make sure it's very secure as I ultimately will be handling sensitive data. I'm wondering what should be in /etc/apt/sources.list. Can anyone comment on these contents? Or, better yet, recommend a good secure sources.list file?
Code:
## Note, this file is written by cloud-init on first boot of an instance
## modifications made here will not survive a re-bundle.
## if you wish to make changes you can:
I installed Openbox and PCManFM on my old desktop and was able to take the desktop RAM load from 275MB to 120MB (Openbox, PCManFM, Kupfer, gnome-panel). I was so pleased with the increased headroom and performance that I wanted to do the same on my MacBook. Usually my MacBook has a desktop RAM load of about 350MB, which I find excessive, as that is heavier than OS X. That's running GNOME, Nautilus, Compiz, and GNOME Do at startup.
I performed a similar procedure today to create the same setup on my laptop and, strangely, my RAM usage on the desktop is still close to 300MB! I brought up System Monitor and added up the RAM of all the processes it detected myself, and it only came to about 50MB. How is my desktop still so heavy in a minimal environment? (Both machines are Karmic 9.10, by the way. The desktop is hardwired to the Internet while my laptop uses Wicd.)
I recently set up a new LAMP server, and after some days faced a strange problem: suddenly the server load goes to 80 or 100, but there is no weird process running. The normal load is between 0.5 and 1.5. The server has 2 HDDs in hardware RAID 0.
I am using KVM and have created four guest operating systems on it. The host is Ubuntu 10.04. I am running 4 websites in a reverse-proxy environment. One of the websites runs on a CentOS VM. Right now there is no traffic on the website's static HTML pages. I have no clue why it is taking a long time to access.
Our CentOS 5.5 server has an intermittent problem where kswapd0 begins using 100% CPU, driving the system load to 20-30 and higher, and eventually crashes the server. The problem seems to be triggered by an intensive Java process (Lucene indexing), but only once every month or two. Lucene reindexing normally runs every 15 minutes without a problem. When the problem happens, there is still plenty of free RAM (as measured by /usr/bin/free's "buffers/cache" value, which is 1.5GB) and free swap space. The server is running MediaWiki 1.15 with the standard CentOS Apache, PHP, MySQL. My intuition is that this is a kernel/swapper bug.
Kernel info: $ uname -a Linux myhost 2.6.18-194.11.1.el5 #1 SMP Tue Aug 10 19:05:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
Memory info. Normal memory usage:
$ free
             total       used       free     shared    buffers     cached
Mem:       3090688    2769932     320756          0     258416    1531780
-/+ buffers/cache:     979736    2110952
Swap:      2097144       8212    2088932

Memory usage 30 seconds before a crash:
$ free
             total       used       free     shared    buffers     cached
Mem:       3090688    3050764      39924          0     113772    1309972
-/+ buffers/cache:    1627020    1463668
Swap:      2097144     179480    1917664
I just want to tell laptop users how they can protect their HDD from a heavy load-cycle count (spin up/down of the HDD) when running on battery. If you are not sure, just type this command several times at intervals of 1-2 minutes, and you will see how often the HDD head spins down and back up: Code: sudo smartctl --all /dev/sda | grep -i load_cycle. Change /dev/sda to the appropriate HDD path for your PC.
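The aggressive head parking is usually driven by the drive's APM level on battery, so one common workaround (drive-dependent; treat the values as assumptions to test) is to raise it with hdparm:

```shell
# Raise the APM level so the heads are not parked aggressively.
# 254 = maximum performance without spin-down; 255 disables APM entirely
# where the drive supports it. Needs root; /dev/sda is an example path.
#   sudo hdparm -B 254 /dev/sda
# Check the current level:
#   sudo hdparm -B /dev/sda
```

Note the setting resets on power cycles on many drives, so it is typically reapplied from a boot or power-management hook.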
I have recently begun contributing to the Folding@Home distributed computing project. Due to oversights in the application's design and to process the most advantageous work units, I need to run this command line app through wine. The application is very CPU/RAM intensive, but it does not put much load on the hard drive. It is multi-threaded and uses up to 100% of all 6 cores in my machine. I run it with a "nice" level of 19 with hopes that it won't slow down my normal desktop use.
Here is the problem... when running this setup, my system will frequently freeze for a fraction of a second and then resume. With the more complicated work units, this can happen 1-2 times per minute. During the freeze, the mouse will stop moving, videos stop playing, music pauses, etc. The system is completely unresponsive. However, each instance only lasts for a very short time. Since this happens so often, my productivity is negatively impacted (and it's very annoying).
I previously ran the native Linux version of F@H without this problem, but that was also processing much less complicated calculations. I have tried with wine 1.2.1 and 1.3.5 with the same results. The application does not have problems running on Windows. It has been suggested that the current Linux CPU scheduler is to blame, but is there anything I can do to resolve this now or work around it?
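One workaround worth trying while the scheduler question stands: lower the I/O priority as well as the CPU priority. The binary name below is a placeholder; `ionice` ships with util-linux, `schedtool` is a separate package.

```shell
# CPU nice 19 plus best-effort lowest I/O priority for the client:
#   nice -n 19 ionice -c2 -n7 wine fah-client.exe
# A stronger variant using the SCHED_IDLE scheduling class:
#   schedtool -D -e nice -n 19 wine fah-client.exe
```

If the stutter persists even under SCHED_IDLE, that points away from CPU contention and toward something else (e.g. memory pressure or lock contention in wine).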
This happens with both NetworkManager and Wicd, a problem I've had since I got Ubuntu, across multiple wipes and reinstalls. The wifi is supported out of the box. The problem I've randomly run into since installing is that it will randomly stop detecting networks ("no networks detected").
I've noticed over time, as this has been happening, that it usually occurs whenever there is a high load - multiple videos going in multiple tabs, videos queued up, etc. It has never happened during regular browsing - there's always a lot going on when it happens, almost like it collapses. The only way to fix it is to quit the manager and shut off the computer (restarting doesn't work); then it comes back.
I got a dedicated server; the datacenter told me I have a 1000 Mbps public & private network uplink/downlink. How can I check from the console whether they are telling the truth?
Also, how can I get info about the server's network card from the console?
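A few console checks for both questions; the interface name is an example, and `ethtool`/`iperf` may need to be installed:

```shell
# Interfaces the kernel sees:
ip -o link show
# Card model (needs pciutils):
#   lspci | grep -i ethernet
# Negotiated link speed/duplex (root on some systems):
#   ethtool eth0
# Actual throughput - run `iperf -s` on a second machine, then from here:
#   iperf -c <second-machine>
```

The ethtool line only proves the negotiated link rate; the iperf run against a host outside the datacenter is what tests the uplink claim.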
My LAN has 2 PCs, one with Ubuntu 10.04 and one with Windows XP. I run the server on Ubuntu and the client on Windows XP. Because I am doing a stress test, the client keeps sending tons of packets to the server.
The strange thing is: after a few seconds, the client program crashes because of an insufficient network buffer, while the server is still OK. But after that I can't connect to the Ubuntu PC anymore until I restart it. And when I check the router, the LED for the Ubuntu PC is always ON (not blinking); it looks like it is jammed already.
Is there a way to switch an internal card slot from /dev/mmcblk0** to /dev/sd**? I'm backing up (via rsync) an ebook directory (1.9GB) to an SD card and get massive I/O errors, the card turning read-only, etc. This is via a built-in slot on a Toshiba laptop. When I use a card reader, the card attaches as /dev/sdd1 and all is fine.
This is a new Sid install. The laptop's card slot worked fine under Ubuntu 9.04 and still works if the amount of data is small. All I can figure is that the mmcblk device dies under heavy load...
I seem to be having a strange problem configuring Piranha to load balance (direct routing) 2 ports across 2 W2K3 servers in a test environment. What is strange is that one of the ports works fine but the other doesn't. I've read many how-tos, and after many frustrating hours I disabled the firewall, iptables, and arptables services, and one of the ports is now load balanced across the 2 real servers. Here's the environment.
[Code]....
I can telnet from the client to the real servers on both ports and it works. When I telnet to the VIP, only one port gets through and the other gives me "could not open connection to host port 32777: connect failed". The configuration in Piranha for one port is the same as for the other. I can't help but think that some other configuration for port 32777 was missed.
I have a few mail servers, a mail log server, and a web server running on CentOS 5. Now I have a task: to avoid accidental crashes on the production servers while installing updates, my boss asked me to make clones of the servers (excluding the actual e-mails and log contents) as VMware virtual machines and run them on VMware Server. First I will install and test updates on the clones, and if they run without crashes I will apply the updates to the real production servers.
I have already installed VMware Server 2.0. I have a few questions: How do I build the virtual machines so they exclude the actual mail files and mail logs? Can I use VMware Converter for this purpose, or do I have to use another program? How do I actually do the cloning? Is there a tutorial on how to do this?
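One file-level approach (as opposed to VMware Converter's block-level copy, which would carry the mailboxes along and require deleting them afterwards) is rsync with exclude rules. The demo below uses throwaway temp directories standing in for the real filesystems, and the directory names are illustrative; adjust them to your actual mail spool and log layout.

```shell
#!/bin/sh
# Demonstration of copying a system tree for a test clone while
# excluding mailboxes and logs.
SRC=$(mktemp -d)   # stand-in for the production root
DST=$(mktemp -d)   # stand-in for the clone's mounted disk

mkdir -p "$SRC/etc" "$SRC/var/spool/mail" "$SRC/var/log"
echo "config"  > "$SRC/etc/app.conf"
echo "mailbox" > "$SRC/var/spool/mail/user"
echo "logline" > "$SRC/var/log/maillog"

# -a preserves permissions/ownership/times; --exclude paths are
# interpreted relative to the source root
rsync -a --exclude='var/spool/mail/' --exclude='var/log/' "$SRC/" "$DST/"
```

On the real servers the same rsync would run from the production root into the VM's mounted virtual disk, followed by reinstalling the bootloader inside the VM.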
I get this type of message:
kjournald starting. Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode
Freeing unused kernel memory: 212k freed
Warning: unable to open an initial console
After this the server is not in a hung state, but it stays at the same point. Can anybody help me resolve this type of issue?
I'm trying to create a new RAM disk image to get my server to load the raid1 module at startup. I was following the Red Hat documentation, which suggests the command mkinitrd --with=raid1 initrd-raid1-$(uname -r).img $(uname -r). However, what I actually ran was mkinited --with=raid1 inited-raid1-$(uname -r).img $(uname -r), and I'm getting this message: No kernel available for 'inited-2.6.18-128.el5'
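Assuming the stray "mkinited"/"inited" spellings were typos for mkinitrd/initrd, the documented invocation plus a quick sanity check would look like this on stock RHEL/CentOS 5 (run as root; paths are the conventional ones):

```shell
# Build the image with the raid1 module forced in:
#   mkinitrd --with=raid1 /boot/initrd-raid1-$(uname -r).img $(uname -r)
# Confirm the module actually made it into the image:
#   zcat /boot/initrd-raid1-$(uname -r).img | cpio -t | grep raid1
# Finally, point the initrd line for this kernel in /boot/grub/grub.conf
# at the new image.
```

The "No kernel available" message generally means the version string passed as the last argument does not match a directory under /lib/modules, which is consistent with the arguments being shifted by the misspelled image name.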