Ubuntu Servers :: How To Measure Performance On A Computer
Aug 3, 2011
How do you measure performance on a computer? I know there are benchmark sites, and they give general guidance for selection. However, I want to learn how to build a cluster from commodity parts and make sure it is equivalent in performance to a specific server. I know clustering is a bit abstract and it will be difficult to measure performance directly and compare it to one specific board. I am fine with that.
I recently set up a web server at Linode. I've been reading a lot about tuning MySQL, but other than hitting web pages and seeing how fast they load, how do I tell how well my tuning is working?
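One way to answer this that doesn't depend on eyeballing page loads: time the operation repeatedly before and after each tuning change and compare averages and tail latencies. A minimal Python sketch (the `benchmark` helper and the stand-in workload here are mine, not part of any MySQL tool):

```python
import statistics
import time

def benchmark(fn, runs=20):
    """Time fn() repeatedly; return (mean, p95) in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    mean = statistics.mean(samples)
    p95 = samples[int(len(samples) * 0.95) - 1]
    return mean, p95

# Stand-in workload; replace with e.g. a urllib fetch of one of
# your pages, or a query through your MySQL client library.
mean_ms, p95_ms = benchmark(lambda: sum(range(10000)))
print(f"mean={mean_ms:.2f}ms p95={p95_ms:.2f}ms")
```

Rerun after each my.cnf change; if the mean and the 95th percentile both drop, the tuning is working.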
I just wanted to glean some sort of general average and compare my system with everyone's. Post your computers: boot time, of course; hardware specifications (processor, HDD, RAM, etc.); distribution; and whether it's a laptop or desktop (or a netbook). Mine is 43 seconds, running Ubuntu 9.10 on a netbook. My hardware specs: Intel Atom 1.6 GHz, 320 GB 7200 RPM HDD, 2 GB RAM.
Is there any command-line tool to measure the network speed between my two Linux servers without taking disk speed into account? My network is supposed to be 100 Mb/s, but it doesn't feel like it, so I wonder where the bottleneck is. The numbers I see don't correlate well with that. I'd like to know the speed from network card to network card.
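The standard tool for this is iperf: run `iperf -s` on one server and `iperf -c <server-ip>` on the other, and it reports card-to-card throughput with no disk involved. To show the underlying idea, here is a self-contained Python sketch of the same memory-to-memory test; as written it runs both ends in one process over loopback (the port number and transfer size are arbitrary), but splitting the receive and send halves onto the two hosts measures the real link:

```python
import socket
import threading
import time

def drain(conn):
    # Receive and discard until the sender closes the connection.
    while conn.recv(65536):
        pass
    conn.close()

def throughput_test(host="127.0.0.1", port=5201, megabytes=16):
    """Send `megabytes` of zeros over TCP and return MB/s."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    receiver = threading.Thread(target=lambda: drain(srv.accept()[0]))
    receiver.start()

    cli = socket.create_connection((host, port))
    chunk = b"\0" * 65536
    sent = 0
    start = time.perf_counter()
    while sent < megabytes * 1024 * 1024:
        cli.sendall(chunk)
        sent += len(chunk)
    cli.close()
    receiver.join()
    srv.close()
    return sent / (1024 * 1024) / (time.perf_counter() - start)

print(f"loopback: {throughput_test():.0f} MB/s")
```

On a real 100 Mb/s network this should top out around 11-12 MB/s; much less than that points at the NIC, cable, switch, or duplex settings rather than the disks.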
I wanted to know if I can install MRTG on a client computer on the network and measure the network router's traffic. I know that it can be installed on the server.
I recently set up a new home backup server with Ubuntu 9.10 x86_64 using the alternate CD. I used the CD's installer to partition my disk and created a software RAID 5 array on four disks with no spares. The root file system is located outside the RAID array.
At first the array performed nicely, but as it started to fill up, the I/O performance dropped significantly, to the point where I get a transfer rate of 1-2 MB/s when writing!
DNS cache server: this is probably more of a network question, but I figured someone who is a network expert might know. My organization currently has DNS servers, but my question is: would setting up a caching server improve performance at all? When I first thought about it, I thought probably not. But since it stores information in RAM, that made me think maybe it would improve network performance a little.
I'm currently experiencing some serious issues with write performance on a RAID-1 array. I'm running Ubuntu 10.04 64-bit server with the latest updates. To evaluate the performance I ran the following test: [URL]... (great article, by the way!) Using dd to measure, write performance is only 8.7 MB/s. Read is great, though, at 74.5 MB/s. The tests were run straight after rebooting, and I have not (yet!) done any kernel tuning or customization; I'm running the default server package of the Ubuntu kernel. Here's the motherboard in the server: [URL]... with a beta BIOS to support drives over 300 GB.
[code]...
As you can see from the bo column, something is definitely stalling. According to top, %wa (waiting for I/O) is always around 75%, yet writes are stalling. The CPU is basically idle all the time. The hard drives are quite new, and smartctl (smartmontools) does not detect any faults.
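For reference, the usual dd write test looks like `dd if=/dev/zero of=testfile bs=1M count=512 conv=fdatasync`; the fdatasync part matters, or you are mostly timing the page cache. A rough Python equivalent of that sequential, flushed write test (path and sizes here are arbitrary):

```python
import os
import tempfile
import time

def write_speed(path, megabytes=64, block=1024 * 1024):
    """Sequential write test; fsync is included so the flush to disk is timed."""
    buf = b"\0" * block
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(megabytes):
            os.write(fd, buf)
        os.fsync(fd)
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return megabytes / elapsed

testfile = os.path.join(tempfile.gettempdir(), "write_speed_test.bin")
print(f"{write_speed(testfile):.1f} MB/s")
```

Running it with the target file on the array, both when the array is nearly empty and when it is nearly full, would confirm whether the slowdown tracks fill level (inner disk tracks and fragmentation both get worse as an array fills).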
I built my own file server based on the Intel Atom 525 and Ubuntu 11.04 (amd64): http:[url].... It has two 2 TB Western Digital Green drives connected via SATA. Internal file transfers (disk-to-disk) using Nautilus zip along at 80 Mb/sec. Over Samba, however, I'm getting 35 Mb/sec. Other than creating the shares, I haven't modified smb.conf. I have a gigabit network. I've run atop on the receiving computer and it isn't being taxed; at 35 Mb/sec the file server is also not being taxed.
Should I be focusing on testing the onboard NIC (Realtek 8111E) in the file server, or looking at Samba?
It is vital to have a useful server performance monitoring tool that prevents growth-related performance issues. Moreover, it should offer long-term capacity planning and trend analysis, along with detecting performance issues and unwanted outages.
A month or so ago I set up a Samba file server with Active Directory integration at my company. I chose to install it on my newly created RAID array, all on an ext3 filesystem. The purpose of this file server is to hold lots of files from the company's different departments (all Windows workstations except mine). Everyone has a private folder and a department folder, along with a common folder for all employees. Did I make a mistake formatting everything as ext3? Would I get a significant increase in performance if I resized the current Ubuntu partition, created a new NTFS partition, and moved the files to it?
I am in the process of running a set of performance tests for the latest Sun JVM, 1.6.0_20. I am using the DaCapo benchmark suite for that: [URL]
I ran the test suite very often with all sorts of settings, but recently my Ubuntu system froze once. I could still ping the machine, but nothing else was responding any more: no screen output, no SSH login, no way to switch consoles. After rebooting, the system logs were quiet. Not a single trace of any problem.
I am using a custom-compiled kernel 2.6.34: Linux i7 2.6.34-custom-201005231602 #1 SMP PREEMPT Sun May 23 16:06:01 CEST 2010 x86_64 GNU/Linux, and I am experimenting with the -XX:+UseLargePages JVM switch, which requires setting up hugetlbfs on the Linux system: [URL]
A similar issue happened in March 2010 on one of our CentOS 5.4 systems, on which we run a heavily loaded Java application; we had to hard power off the machine, and after the reboot there was no trace of the problem in the logs. On that server we used JDK 1.6.0_17 and did not use hugetlbfs.
My first question is: what should I do so that the next time something like this happens, I have more information available after the incident to debug and analyse the problem?
I run a dedicated specialty Quake 3 Arena server. It currently runs stock Debian 5.05. These are the hardware specifications:
256 MB SDRAM, 10 GB hard drive, Intel Celeron
I think I should be getting more speed than I am. I would like to install Ubuntu Server. Which version is the most stable and will provide the best speed? I have to download my server files from the Internet; is this possible without the GUI? Is there any way to control my server remotely without an impact on performance? VNC has a huge impact. I want to run a mail server as well; is this possible without a performance hit?
I am using MonoDevelop with SQL Server. Whenever I use the Select method on a DataTable object, performance really deteriorates; it is three times slower than under .NET. Are there any workarounds?
I wonder if any of you could share whether you have used Ubuntu for a large-scale website or mission-critical project, say 500,000 secure transactions per 3 hours with 4 million users accessing the server. How does Ubuntu perform?
It stores all my important stuff, as well as some music and movies. I use a second Linux box in my living room to "stream" content via an NFS or Samba share. The streaming tends to stop several times during playback and needs to fill its buffer again before continuing to play. I also have some Windows XP and 7 based computers that connect to this file server. I have noticed that directory listing is VERY slow, and there is a huge lag when I want to save or read a file to/from my home directory.
This is my setup: Ubuntu Server 10.10 64-bit (I have the same problem with 32-bit Ubuntu); 3 RAID 5 arrays with 4 hard drives each; LVM on top of the 3 RAID 5 arrays. The logical volume I use is about 6.5 TB, with the ReiserFS file system. This LVM has grown over the years and has had some disks replaced, so I have used the pvmove and extend commands a bit. I have tried using iotop and top to check whether there aren't enough resources available, but that doesn't seem to be the problem. I haven't been able to find out why streaming over the network stops, but I know it is the server that causes the problem. Does ReiserFS have any performance problems with large logical volumes? Would changing to ext4 or some other FS give any performance gain?
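When the stalls happen, it helps to see which device is actually busy; iostat -x 1 (from the sysstat package) is the ready-made tool for that. A small sketch of the same idea, sampling /proc/diskstats directly (the field positions follow the kernel's documented iostats layout; the one-second interval is arbitrary):

```python
import time

def diskstats():
    """Return {device: sectors_written} parsed from /proc/diskstats."""
    stats = {}
    try:
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                # fields: major minor name reads ...; index 9 is sectors written
                stats[fields[2]] = int(fields[9])
    except OSError:
        pass
    return stats

before = diskstats()
time.sleep(1)
after = diskstats()
for dev, sectors in before.items():
    delta = after.get(dev, sectors) - sectors
    if delta:
        print(f"{dev}: {delta * 512 // 1024} KiB written in the last second")
```

If one member disk of an array shows far more activity than its peers during a stall, that disk (or its cable or controller port) is the first suspect.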
I just wanted to know whether having my laptop set to the ondemand governor will affect performance in any way. I realize it raises the clock speed when the CPU is under load, but does the time it takes to step up from the ondemand low state affect speed? Will there be any noticeable difference between the two setups? I have a dual-core Intel at 2.2 GHz in performance mode; with ondemand set and no load it downclocks to 800 MHz.
The kvm and qemu packages are integrated into a single package, but this doesn't mean qemu is integrated into the kernel now in any way, does it? I should still install kqemu to get the performance improvements, right?
On F13 i386 inside VirtualBox. The host is a Win7 x64 box.
This may be more idle curiosity than anything else, since the server still works... but it's really bothering me because I can't explain it.
My (limited) understanding is that the devel files are just the source files and shouldn't actually affect performance.
Yet, prior to installing httpd-devel (and its dependencies), /etc/init.d/httpd restart took about 2 seconds total. After installing httpd-devel, the "stopping" phase of the httpd restart takes 5-7 seconds (which seems really long when you're just sitting there waiting on it).
yum install httpd-devel also installs the following dependencies: db4-cxx, db4-devel, cyrus-sasl-devel, apr-devel, apr-util-devel, openldap-devel, expat-devel.
All but one of those are devel packages as well. And here's the real kicker: uninstalling them through yum doesn't put things back to normal (even after a reboot).
I'm no Linux expert, but I wouldn't think installing devel files would have this effect, or any effect like this, since they're just source code files (?). Can anyone explain what's happening, or give me tips on how I can watch the Apache process stop and see what's hanging it up?
I want to measure how much Internet I use on a day-to-day basis. I don't mean how much time I spend on the Internet; I mean the volume of data, i.e. how many MB I download and upload, including everything: surfing, online backups, email, IM, VoIP, updates, streaming, and so forth. I don't need this broken down by type (though it would be nice); I just need totals per 24-hour period. How can I get these statistics? Do I need to install a special program?
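vnstat is built for exactly this: it samples the kernel's interface counters in the background and gives daily, weekly, and monthly totals per interface (`vnstat -d` for the daily table). Those same counters are also readable directly; a sketch that snapshots them from /proc/net/dev (the counters are cumulative since boot, so a daily total is the difference between two snapshots taken 24 hours apart):

```python
def iface_bytes():
    """Return {interface: (rx_bytes, tx_bytes)} from /proc/net/dev."""
    totals = {}
    try:
        with open("/proc/net/dev") as f:
            for line in f.readlines()[2:]:  # first two lines are headers
                name, data = line.split(":", 1)
                fields = data.split()
                totals[name.strip()] = (int(fields[0]), int(fields[8]))
    except OSError:
        pass
    return totals

for dev, (rx, tx) in iface_bytes().items():
    print(f"{dev}: {rx / 1e6:.1f} MB down, {tx / 1e6:.1f} MB up since boot")
```

Note the counters reset at reboot, which vnstat handles for you; a hand-rolled logger has to account for that.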
Any good ideas for a GUI program that monitors the temperature of the CPU and hard drive? I'd prefer one that works like a system monitor and can show trends on a graph (like CPU usage, etc.). It is irritating that you can't even add a silly app to the panel because UNE is locked (err, Ubuntu, what were you thinking? Give us the choice! Rant over). So far I have just written a quick script, after installing lm-sensors and hddtemp and running the necessary setup routines, that will show me the temperatures, but I would like some history data.
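For the history part, one low-tech option is to log a timestamped sample every minute (from cron, say) and graph the file later. The kernel also exposes temperatures under /sys/class/thermal, which avoids parsing sensors output; a sketch (zone names vary by machine, and some boxes expose no zones at all):

```python
import glob

def cpu_temps():
    """Read thermal zones from sysfs; returns {zone_type: degrees_C}."""
    temps = {}
    for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
        try:
            with open(zone + "/type") as f:
                name = f.read().strip()
            with open(zone + "/temp") as f:
                temps[name] = int(f.read()) / 1000.0  # value is in millidegrees
        except (OSError, ValueError):
            continue
    return temps

print(cpu_temps() or "no thermal zones exposed")
```

Appending a timestamp plus the readings to a CSV once a minute gives the trend data, which gnuplot or a spreadsheet can graph.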
I want to install Ubuntu Server on my server, but I can't move my monitor. What I have tried to do:
1. Install Ubuntu Server onto Computer A.
2. Move Computer A's hard disk into Computer B.
3. Boot Computer B without a monitor.
4. Use SSH to get a remote terminal. This is where my problem is.
Is this possible? I told the installer to install the OpenSSH Server, so it should work, right?
I installed openSUSE 11.3 on my laptop, a Toshiba Satellite L505-13W: Core i5 2.27 GHz, 4 GB RAM, 1 GB ATI graphics. I can't measure the CPU temperature. I tried "acpi -t" and sensors, but nothing happened. I also tried the system information widgets from the Plasma menu; still I can't see my CPU temperature. Can anyone help me with this problem? I want to see my CPU temp.
Lately, my Internet connection stops for several minutes and then reconnects. Since there are a few things to debug here (router, modem, ADSL, Internet provider), I would first like to measure the failure rate. Is there a utility or script that will do the following: wake every 1-2 minutes and verify that a certain web page is available (it doesn't matter which); if it fails, dump a line to a log file and go back to sleep; if it's OK, just go back to sleep. Is there such a utility?
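This is small enough to script directly. A sketch (the URL, interval, and log file name are placeholders to change):

```python
import time
import urllib.request

def check_once(url="http://example.com", timeout=5):
    """Return True if the page answered, False on any network error."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except OSError:
        return False

def monitor(url="http://example.com", interval=120, logfile="uptime.log"):
    """Log a timestamped FAIL line whenever the check fails."""
    while True:
        if not check_once(url):
            with open(logfile, "a") as f:
                f.write(time.strftime("%Y-%m-%d %H:%M:%S") + " FAIL\n")
        time.sleep(interval)

# monitor()  # uncomment to run it; stop with Ctrl-C
```

Counting the FAIL lines per day then gives a failure rate to show the provider. The same thing can be done with a cron job and curl or wget if a Python process running forever is unappealing.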
How can I measure the time taken by N processes and N threads, and then compare the two to show that threads are faster than processes? I'm looking for help understanding the C code, and for a good way to time N processes and N threads in C.
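In C the usual method is clock_gettime(CLOCK_MONOTONIC) before and after the fork/waitpid loop, and again around the pthread_create/pthread_join loop. The same measurement structure in Python, for illustration (the trivial worker body means this times creation and teardown overhead, which is where threads usually win):

```python
import multiprocessing
import time
from threading import Thread

# fork keeps child startup cheap and avoids re-importing this module
Process = multiprocessing.get_context("fork").Process

def work():
    pass  # trivial body: we are timing worker creation and teardown

def timed(worker_cls, n=4):
    """Start n workers, wait for them all, return elapsed seconds."""
    start = time.perf_counter()
    workers = [worker_cls(target=work) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

print(f"{timed(Process):.4f}s for 4 processes, {timed(Thread):.4f}s for 4 threads")
```

With a CPU-bound body the comparison gets murkier (and in CPython the GIL skews it), so in the C version keep the workload identical in both variants and report several runs, not one.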