Ubuntu Servers :: Would Setting Up A Cache Improve Performance?
Feb 26, 2010
This is probably more of a network question, but I figured someone who is a network expert might know. Currently my organization has DNS servers, but my question is: would setting up a DNS cache server improve performance any? When I first thought about it, I thought probably not. But since it stores information in RAM, that made me think maybe it would improve network performance a little.
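For what it's worth, a caching resolver is cheap to try. A minimal sketch with dnsmasq (assuming Ubuntu's package; the upstream address 10.0.0.1 is a placeholder for your existing DNS server):

```shell
# Install a small caching DNS forwarder; repeat lookups are then
# answered from RAM instead of going to the upstream servers.
sudo apt-get install dnsmasq
# Forward cache misses to your existing DNS server (address is an example):
echo 'server=10.0.0.1' | sudo tee -a /etc/dnsmasq.conf
sudo service dnsmasq restart
# Compare a cold lookup against a warm one; the second should be ~0 ms:
dig @127.0.0.1 example.com | grep 'Query time'
dig @127.0.0.1 example.com | grep 'Query time'
```

Whether it helps in practice depends on how repetitive your lookup traffic is; the two `dig` runs give you the answer for free.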
Basically I'm wondering if there is any way to lighten Gnome and Ubuntu; I would like to keep Gnome if possible. I am a fairly experienced Linux (or, for you hardcore GNU fans, GNU/Linux) user, having used it for almost 4 years. I just built my Arch system, but I have found that a lot of the functionality I've come to love about Ubuntu isn't in the default Gnome packages, since Ubuntu's Gnome is heavily modified. So I want to switch back, but due to my lack of modern hardware I can't run Ubuntu as smoothly as I want.
Below is my current hardware. Code: Intel Pentium III 733 MHz, 512 MB of RAM, 8 GB hard drive and a DVD drive, Nvidia GeForce 6600 256 MB PCI GPU, 100 W power supply
I am also a developer, so I know I can compile the kernel myself, remove some unneeded junk and optimize it. But I was wondering: are there some highly intensive processes that don't really need to be running? The only things I would be using Ubuntu for are web browsing, coding, Gimp, text processing and probably music; that's really all I need, I don't do much else besides that. tl;dr: Basically all I'm trying to do is lighten Ubuntu and Gnome without putting 3 days' worth of work into it.
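A low-effort starting point, assuming a sysvinit-era Ubuntu; the service names below are common examples, so check what is actually installed and running before removing anything from the boot sequence:

```shell
# First see what is actually eating RAM and CPU:
ps aux --sort=-%mem | head -n 15
# Then stop and disable services you don't use. Examples only:
# bluetooth on a box with no Bluetooth hardware, avahi if you
# don't need zeroconf discovery.
sudo service bluetooth stop
sudo update-rc.d -f bluetooth remove
sudo service avahi-daemon stop
sudo update-rc.d -f avahi-daemon remove
```

On a 512 MB machine, trimming a handful of daemons plus switching to a lighter window manager theme usually buys more than a custom kernel does.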
I am running an Ubuntu desktop machine as a server and use VNC from my Windows machine to log in to it over the LAN. The login session is very sluggish and frustrating. I installed gnome-rdp to see if it would be better, but I don't know how it works, what to do, or whether there is something else I can do to improve the performance. I have 3 GB of RAM and the server is a dual-Celeron machine.
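One thing worth trying before switching protocols: lower the color depth and force the tight encoding. The flags below are for the Unix TightVNC tools (the Windows viewer exposes the same settings in its Options dialog), and the display number and host name are examples:

```shell
# Serve the session at 16-bit depth; less pixel data per update.
vncserver :1 -depth 16 -geometry 1024x768
# Connect with tight encoding and aggressive compression; lower
# -quality trades JPEG fidelity for speed.
vncviewer -encodings "tight copyrect" -compresslevel 9 -quality 6 host:1
```

If the desktop has compositing or desktop effects enabled, disabling them on the server side also tends to help a lot over VNC.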
I have a Gateway laptop which dual-boots Windows XP SP3 32-bit and Ubuntu 10.04, also 32-bit. The 64-bit version would not install on my computer, even though the computer has 64-bit capabilities. It doesn't bother me that I use the 32-bit version, but something it is now doing seems to be affecting the way things work on my laptop. The computer has 4 GB of RAM, an AMD Turion 64 X2 processor, and an ATI Radeon X-series graphics card. The monitor has HDMI capabilities. On the Windows side, it handles full-screen programs and operates very quickly. On the Linux side, I can also run things quickly; however, most programs there are much slower than they would be on the Windows side.
Something I notice when my laptop goes into fullscreen on the Linux side is that the color quality goes way down. You can see that it is trying to run in what appears to be 256 colors, and each individual pixel is very visible. It does not do this on the Windows side. Also, programs that I run on this half of the computer are very laggy, slow, and inefficient. I know that my computer has the video and processing power to handle these programs with ease, but it isn't utilizing all of it. How can I make Ubuntu run faster overall, taking advantage of all four gigs of RAM and this 2.4 GHz Turion processor to run everything like Windows does?
I have a Radeon X800XL built into my computer. I was able to play Quake 4 in high detail once, so I guess my PC is quite fast. Anyway, I have trouble playing Anno 1503 and SuperTuxKart (sic!). While Anno 1503 is fairly unplayable, SuperTuxKart gets around 30 to 90 fps, depending on the situation. In my opinion, that's way too little.
My question is: how can I improve the 3D performance enough to play such legacy games as Anno 1503? Below is some information about my configuration and what I have tried so far.
I run Ubuntu 10.04, which means the driver provided by AMD/ATI (fglrx) no longer works. My Ubuntu is up to date.
I set the kind of graphics card and the graphics RAM in the Wine registry. Disabling the compiz-fusion effects did not improve the situation, however.
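For reference, the Wine keys in question live under HKCU\Software\Wine\Direct3D. A fragment like the following (values are examples, applied with `wine regedit d3d.reg`) is the usual way to set the reported VRAM and toggle GLSL, which was a common performance knob on this hardware generation:

```
REGEDIT4

[HKEY_CURRENT_USER\Software\Wine\Direct3D]
"VideoMemorySize"="256"
"UseGLSL"="disabled"
```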
I disabled KMS. That gave an amazing performance boost, but still not enough: I am still unable to play Anno 1503, among other performance troubles.
I also created an xorg.conf and tried to tweak some settings, but that did not improve performance much. The config file is attached below.
How can I improve or accelerate the 3D performance? Maybe there is a beta driver or some xorg setting I did not find?
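With fglrx unsupported on 10.04 for that card, the open-source radeon driver is the fallback, and these xorg.conf Device options were the commonly suggested tuning knobs of that era; treat them as a starting point, not a guarantee:

```
Section "Device"
    Identifier "ATI Radeon"
    Driver     "radeon"
    Option     "AccelMethod"        "EXA"
    Option     "MigrationHeuristic" "greedy"
    Option     "EnablePageFlip"     "true"
EndSection
```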
I have written a script that is taking a lot of time to execute: it searches for just 3,500 records, taken as input from one file, in a log file of approximately 12 GB. The script reads a CSV input file whose two fields are transaction_id and mobile_number, searches the log file for lines containing those two strings plus one static string, "CustomCDRInterceptor", and then formats the matched data in the prescribed format.
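A single awk pass over the log is usually far faster than grepping a 12 GB file once per CSV record. This is a sketch, not the original script: the file names are examples, and the sample data is made up just to show the matching logic.

```shell
# Example input CSV: transaction_id,mobile_number per line.
cat > input.csv <<'EOF'
TX1001,9876543210
TX1002,9123456789
EOF
# Example log lines; only the CustomCDRInterceptor one should match.
cat > app.log <<'EOF'
2010-05-01 CustomCDRInterceptor TX1001 9876543210 OK
2010-05-01 OtherModule TX1002 9123456789 OK
EOF
# One pass: load all CSV keys into memory, then scan the log once.
# Only lines containing the static string are checked at all.
awk -F, 'NR==FNR { want[$1 FS $2]; next }
         /CustomCDRInterceptor/ {
             for (k in want) {
                 split(k, f, ",")
                 if (index($0, f[1]) && index($0, f[2])) { print; break }
             }
         }' input.csv app.log > matches.txt
cat matches.txt
```

The key point is that the 12 GB file is read exactly once, however many records the CSV holds, instead of 3,500 times.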
I'm trying to do a partition alignment on my main SSD to improve SSD performance and then install Ubuntu on it. I can do the alignment with no problem, but when I install Ubuntu the alignment is erased. Is there a way to install Ubuntu without losing the alignment?
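One approach that sometimes works: create aligned partitions with parted beforehand, then pick "manual partitioning" in the installer and reuse the existing partitions without re-creating them, since re-creating is what loses the alignment. /dev/sda is an example device, and `align-check` needs a reasonably recent parted:

```shell
# Create a partition starting on a 1 MiB boundary (SSD-friendly):
sudo parted /dev/sda mklabel msdos
sudo parted -a optimal /dev/sda mkpart primary ext4 1MiB 100%
# Verify the alignment before handing the disk to the installer:
sudo parted /dev/sda align-check optimal 1
```

In the installer, choose the existing partition, set its mount point and filesystem, and leave its size and position untouched.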
I was using CentOS for my business applications and now I am trying to work only with openSUSE and install my other operating systems inside it. I had always used VMware, but I decided to try virtualization technologies other than VMware for testing. I searched the internet and found many others, like VirtualBox, KVM and Xen. I concluded from my search that Xen and KVM would be the fastest, so I decided to test them; I chose Xen, which is better than KVM for me. I installed openSUSE 11.4, installed the Xen hypervisor and deployed two VMs, Windows XP and CentOS 4.8. They are running quite well, but I have some questions:
1. Isn't there any way to improve graphics performance in a Xen guest, or to change the video card memory or type?
2. Is there any way to copy and paste between the host and guest?
3. Is there any free application like VMware Tools or the VirtualBox guest additions for Xen?
4. I use these VMs to install some applications for my geophysics work, which requires good graphics performance in the VM, and I don't want them to be sluggish. Which is better for that, VMware or Xen?
I'm using mplayer and libcaca on Gentoo. My framebuffer (uvesafb) is running at 1920x1200 (I don't know how many characters that is) and mplayer has problems filling up the screen, so video and audio lose synchronization.
So I have built a program that takes a picture from two cameras every second and converts both to JPEG format. The problem is that it currently takes ~2 seconds to convert a single raw photo to JPEG, so every second I add another 30 MB raw photo to RAM waiting to be converted. The conversion currently runs on a single core with hyperthreading. Would I see better performance running the exact same workload (a program pulling from a queue and converting to JPEG) as a single process, or as two concurrent processes? Both processes could run on the same core, interleaving one thread per clock slice, or one could run on each core. What other steps would you take to improve performance so the conversion no longer falls behind the capture rate?
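On the concurrency question: two worker processes only help if the conversion has idle gaps (I/O waits) or a second real core to run on; on a single hyperthreaded core the gain is usually modest. Either way, `xargs -P` is a cheap way to measure it before restructuring the program. In this sketch `cp` stands in for the real raw-to-JPEG converter (e.g. ImageMagick's `convert`), and the paths are examples:

```shell
# Fake a queue of raw frames:
mkdir -p frames
touch frames/a.raw frames/b.raw frames/c.raw
# Drain the queue with two worker processes (-P 2); swap the cp
# for your actual converter to benchmark 1 vs. 2 workers.
ls frames/*.raw | xargs -P 2 -n 1 -I{} sh -c 'cp "$1" "${1%.raw}.jpg"' _ {}
ls frames/*.jpg
```

If two workers don't beat one, the conversion itself is CPU-bound on that core, and the fix is a faster codec/library or real additional cores rather than more processes.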
I was laughing about klackenfus's post with the ancient RH install, and then work had me dig up an old server that has been out of use for some time. It has some proprietary binaries installed that intentionally try to hide files to prevent copying (and we are no longer paying for support, nor do we have the install binaries), so a clean install is not preferable.
Basically it has been out of commission for so long that the apt-get upgrade download is larger than the /var partition (apt caches to /var/cache/apt/archives).
I can upgrade the bigger packages manually until I get under the threshold, but then I learn nothing new. So I'm curious: can I redirect apt's cache to a specified folder, either on the command line or via a config setting?
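Yes, on both counts: `Dir::Cache::Archives` is the knob. /mnt/big/apt is an example path; the `partial` subdirectory must exist before apt will use it:

```shell
# One-off, on the command line:
sudo mkdir -p /mnt/big/apt/partial
sudo apt-get -o dir::cache::archives=/mnt/big/apt upgrade
# Or make it permanent via an apt.conf.d snippet:
echo 'Dir::Cache::Archives "/mnt/big/apt/";' | \
    sudo tee /etc/apt/apt.conf.d/99bigcache
```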
I've just bought a 6-core Phenom with 16 GB of RAM. I use it primarily for compiling and video encoding (and occasional web/db work). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached in tmpfs but writes safely go through to the SSD. I then want files (or blocks) on the SSD that haven't been read lately to be written back to an HDD using a compressed FS or block layer.
So basically, reads:
- Check tmpfs
- Check SSD
- Check HDD

And writes:
- Straight to SSD (for safety), then tmpfs (for speed)

And periodically, or when space gets low:
- Move least frequently accessed files down one layer.

I've seen a few projects of interest. CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption); CacheFS seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS.
I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling: I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached.
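If precaching is the goal, building in a tmpfs working tree is the blunt but effective version of the layered scheme above; the mount point and the 2 GB size are examples sized against 16 GB of RAM:

```shell
# Mount a RAM-backed filesystem and build inside it; sources stay
# safe on disk, only the working copy and objects live in RAM.
sudo mkdir -p /mnt/build
sudo mount -t tmpfs -o size=2g tmpfs /mnt/build
cp -a ~/src/project /mnt/build/
cd /mnt/build/project && make -j6
# Copy build artifacts back out before unmounting; tmpfs is volatile.
```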
I needed a larger cache because I have some videos stored on another Samba server and playback is laggy. I set the options cache=20000 and cache-min=10, and that helped play those videos smoothly, but it caused all 1280x720 MP4 files stored on my local drive to lag and lose A/V sync, with the mplayer message: **** Your system is too SLOW to play this! ****
I tried cache values from 1000 to 80000, and they lag in every case, but without the "cache" option these videos play well. For now I have commented "cache" out in my config.
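If you play the remote videos as smb:// URLs, MPlayer's per-protocol config profiles can scope the big cache to the network case only, so local playback keeps the default (the values are the ones from the post):

```
[protocol.smb]
cache=20000
cache-min=10
```

If the share is mounted locally instead (so MPlayer sees an ordinary path, not an smb:// URL), the profile won't trigger; a shell alias that adds `-cache 20000 -cache-min 10` for network files is the simpler route.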
I thought about moving the Firefox cache directory to /tmp, as I am moving /tmp to RAM since I just got an SSD, to move this write-hungry feature off the drive. However, with Firefox 4 beta 12 (shipping with 11.4 x86_64), when I go into about:config there is just no option browser.cache.disk.parent_directory, which is what you would use to configure this. I searched a bit, and it seems this is still intended to be in Firefox 4. I might as well downgrade to 3.x for the moment; I see there is an RC of v4 available, but no package of it ready in the repos. By the way, any thoughts on putting the Firefox cache in RAM? I know it will be flushed at reboot! But the alternatives are:
- leave it on the SSD (and wear it out)
- put it on the HDD (so slow)
- disable the cache
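Reportedly the pref still works in Firefox 4 even though it is absent from about:config by default: you have to create it yourself (right-click > New > String), or append it to user.js. The profile directory below is a placeholder; find your real one under ~/.mozilla/firefox/:

```shell
# Point Firefox's disk cache at /tmp (which you are mounting in RAM).
# Replace xxxxxxxx.default with your actual profile directory.
profile=~/.mozilla/firefox/xxxxxxxx.default
echo 'user_pref("browser.cache.disk.parent_directory", "/tmp/firefox-cache");' \
    >> "$profile/user.js"
```

Firefox reads user.js at startup, so restart the browser after adding the line.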
Has anybody ever tried flashcache [URL] under Ubuntu? My idea is to increase the throughput of a small server, especially for database operations, by using an SSD as a cache.
A tool called ktune has appeared in the repository. Its purpose is to adjust some sysctl.conf settings to improve speed on heavily loaded servers. What is this tool for, if one can achieve the same with a configuration file added to system startup? Or is ktune just such a file?
Is it possible to configure a cron script to update the packages in /var/cache/apt-cacher/packages? When a client machine updates a package, apt-cacher checks that its cached package is up to date, and downloads a new version if it is not.
I'd like apt-cacher to check its cached packages every night and download any updated ones, on the premise that since a package exists in the apt-cacher cache, someone has it installed and is going to want to update it. Is this possible? Does apt-cacher do this anyway and I just haven't noticed?
I'm working on a web site. I'm sometimes finding that when I update a file, some mysterious force intercepts my changes and refuses to let me see them. I have confirmed that this mysterious force is a server cache. By adding a "?" to the end of the URL, I can see the changes I've made. Is there a way to somehow force my Internet provider to update the server cache?
I am running Ubuntu Lucid x64 as a fileserver that shares its files via SFTP, NFS and Samba. Currently the hard disks are configured to go to standby if they are not needed. This works perfectly as long as no one browses the shares and my HTPC is not running: the HTPC repeatedly looks through the shares for new music or movies. In other words, my problem is that the disks are spinning up a lot more often than they should have to. Additionally, the spin-up time delays the response time while browsing. Since the machine has a lot of unused RAM, I want to tell the kernel to keep the directory structure in memory. That way the disks would not need to spin up every time someone browses through the directories.
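The knob for this is `vm.vfs_cache_pressure`: lowering it tells the kernel to prefer keeping dentry/inode caches over page cache when reclaiming memory. A sketch, with an example share path (the value 10 is a starting guess, not a recommendation):

```shell
# Keep directory/inode metadata cached in the plentiful unused RAM:
echo 'vm.vfs_cache_pressure = 10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -w vm.vfs_cache_pressure=10
# Warm the cache once so the first browse doesn't spin the disks up:
find /srv/share -type d > /dev/null
```

Memory pressure can still evict the cache over time; re-running the `find` from cron keeps it warm at the cost of one spin-up per run.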
I am using Ubuntu Server 10.10 for caching and proxy purposes. I installed Squid 2.7 stable 9. My problem is how to cache some URLs by force, e.g. [URL]... I searched for clues using Google, but I only found how to block URLs, so I have come here to ask for advice. I want to cache a couple of sites because our country has bandwidth problems.
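In squid.conf, `refresh_pattern` with the override options is the usual way to force-cache a site on Squid 2.7. The domain and times below are placeholders; run `squid -k reconfigure` after editing:

```
# Force-cache one site by overriding its freshness headers:
# min 10080 min (1 week), max 43200 min (30 days).
refresh_pattern -i ^http://downloads\.example\.com/ 10080 90% 43200 override-expire override-lastmod ignore-reload ignore-no-cache
```

Be aware this deliberately violates HTTP caching semantics, so clients may be served stale content from the forced sites.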
I recently installed a new home backup server with Ubuntu 9.10 x86_64 using the alternate CD. I used the CD's installer to partition my disk and created a software RAID 5 array on 4 disks with no spares. The root file system is located outside the raid array.
At first the array performed nicely, but as it started to fill up, the I/O performance dropped significantly, to the point where I get a transfer rate of 1-2 MB/s when writing!
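Two things worth checking before blaming the disks, sketched with example device and mount-point names: md's stripe cache, which is tiny by default, and whether the array is quietly resyncing in the background:

```shell
# Enlarge the RAID-5 stripe cache (default is often 256 pages):
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size
# Make sure the array isn't rebuilding or degraded while you test:
cat /proc/mdstat
# Measure sequential writes without the page cache flattering you:
dd if=/dev/zero of=/mnt/raid/test bs=1M count=1024 conv=fdatasync
```

A background resync alone can explain 1-2 MB/s; if `/proc/mdstat` is clean, the stripe cache and chunk/filesystem alignment are the usual suspects.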
I recently set up a webserver at Linode. I've been reading a lot about tuning MySQL, but other than hitting web pages and seeing how fast they load, how do I tell how well my tuning is working?
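Numbers beat eyeballing page loads. MySQL's own counters are the place to start; credentials below are placeholders:

```shell
# Cache hit/miss counters: watch how the ratios move as you tune.
mysqladmin -u root -p extended-status | \
    egrep 'Qcache_hits|Qcache_not_cached|Key_reads|Key_read_requests'
# Count of queries exceeding long_query_time; enable the slow query
# log to see which queries they actually are.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Slow_queries'"
```

Take a snapshot before and after each change; a tuning tweak that doesn't move any counter probably isn't doing anything.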
How do you measure performance on a computer? I know there are benchmark sites, and they give general guidance for selection. However, I want to learn how to build a cluster from commodity parts and want to make sure it is equivalent to a specific server in performance. I know clustering is a bit abstract and it will be difficult to measure direct performance and compare it to one specific board; I am fine with that.
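For like-for-like numbers, run the same synthetic benchmark on the reference server and on each cluster node; sysbench is a common choice (the flags below use the old pre-0.5 syntax). For a whole-cluster compute figure, HPL/Linpack is the customary benchmark:

```shell
# CPU: events/sec at a fixed prime-computation workload.
sysbench --test=cpu --cpu-max-prime=20000 --num-threads=8 run
# Disk: random read/write throughput on scratch files.
sysbench --test=fileio --file-test-mode=rndrw prepare
sysbench --test=fileio --file-test-mode=rndrw run
sysbench --test=fileio cleanup
```

Per-node numbers only bound the comparison; interconnect latency and the workload's parallelizability decide how much of the summed performance the cluster actually delivers.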