My desktop tower is pretty old and runs horribly slowly nowadays. I only use it for a little internet browsing, uploading photographs, and so on. It runs on 1 GB of RAM and has a Celeron(R) 2.4 GHz processor. What is the best way to boost overall performance and response times? Would a better processor be viable?
I installed Ubuntu 10.10 Netbook Edition today on the Asus Eee PC 1015PED. Specifications for the Asus Eee PC 1015PED: Atom 455, 1.66 GHz, 1 GB RAM, 250 GB drive, Bluetooth 3.0, 0.3-megapixel webcam, Gigabit Ethernet, WSVGA (1024x600), sound card compatible with the HD audio connector, D-Sub, three USB 2.0 ports, 6-cell lithium-ion 4400 mAh battery, WiFi 802.11 b/g/n.
I would appreciate either a solution to the problem or a pointer to the package I need to install, and ideally a tutorial on which services to disable to boost performance. First of all, though, I need help getting connected to the Internet, specifically on this netbook, the Asus Eee PC 1015PED. I updated everything over a wired connection. The wireless connection is detected, but a short while after it connects I get "Wireless - Network Disconnected".
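In case it helps with a diagnosis, these are the basic checks I can run and post output from (standard commands on Ubuntu 10.10; nothing here is specific to my problem yet):

Code:
# identify the wireless chipset and the driver handling it
lspci | grep -i -E 'network|wireless'

# show the current state of the wireless interface
iwconfig

# show the most recent kernel messages right after a disconnect
dmesg | tail -n 30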
I just wanted to know: if my laptop is set to the ondemand governor, will this affect performance in any way? I realize it ramps the clock up to full speed when the CPU is under load, but does the time it takes to go from the low clock to full speed affect responsiveness? Will there be any noticeable difference between the two setups? I have a dual-core Intel at 2.2 GHz when in performance mode; with ondemand set and no load, it downclocks to 800 MHz.
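For reference, this is how I have been inspecting and switching the governor (a sketch assuming the usual cpufreq sysfs layout; paths can differ slightly between kernels):

Code:
# list the governors the kernel offers and the one in use on CPU 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# watch the current clock while a load runs, to see how quickly ondemand ramps up
watch -n 0.5 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq

# switch CPU 0 to the performance governor (run as root)
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor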
I have several servers (a mail server, a web server, and a lot of file servers), all running 32-bit Slackware, and I am satisfied! My company plans to upgrade our servers and buy 64-bit hardware, maybe an AMD X3 or an Opteron. Will 64-bit Slackware help boost my server performance, or should I stick with 32-bit for now?
I have an nVidia GeForce 8500 GT 512 MB graphics card. I put it in my system to get a speed boost and for a dual-monitor setup. I don't have the proprietary drivers installed now, but I tried installing them: when I did, it asked for a reboot, so I rebooted, and when the system came back up only one monitor was in use and it was running very, very sluggishly. I opened Monitors from the settings and it said to use nVidia's own tool, so I did; I enabled the second monitor and hit apply, and it asked for a restart of Xorg. Doing that brought back a message that no monitors were found, and a reboot drops me straight to tty1... I tried both the recent driver version and the older one; both did the same thing. I really want to get my Compiz effects back. Is there a way to get this working? I will do anything you ask if it solves the problem...
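For what it's worth, the recovery I have been attempting from tty1 is roughly the following (a sketch only, assuming the proprietary driver installed its nvidia-xconfig tool and that the broken config is /etc/X11/xorg.conf):

Code:
# keep a copy of the config that X currently chokes on
sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.broken

# let the NVIDIA tool write a fresh basic config, then reboot
sudo nvidia-xconfig
sudo reboot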
I recently set up a new Linux server running Fedora 10. For some reason all ping response times are rounded to the nearest 10 ms. For example, running the simple command "ping yahoo.com" gives the following sample results:
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=12 ttl=57 time=60.0 ms
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=13 ttl=56 time=50.0 ms
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=14 ttl=56 time=40.0 ms
64 bytes from ir1.fp.vip.re1.yahoo.com (69.147.125.65): icmp_seq=15 ttl=56 time=50.0 ms
I could post a larger result set, but it's all the same: every response is rounded to a multiple of 10 ms. This wouldn't be a big deal except that the server is running Nagios for monitoring, so accurate stats are important. The Nagios check_ping and check_icmp commands are also returning rounded-off results. How can I get ping to simply report the actual response times rather than a rounded-off number?
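One thing I can check and post here, on the guess that the 10 ms granularity comes from a coarse clock source rather than from ping itself, is which clocksource the kernel is using:

Code:
# the clock source currently driving timekeeping, and the alternatives available
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource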
We bought a dual 12-core Opteron machine (Supermicro H8DGi board), installed Slackware 13.37, and performed a few tests. We observed that performance quickly degraded as the box became loaded. For instance, a single task may take time "t", but when running 24 of them at the same time (fully loading the box), each may take 2 to 3 times longer. From the tests we did (tinkering with the BIOS, moving memory modules, etc.) we came to the conclusion that the problem was due to terrible memory management. We finally solved the problem by recompiling the kernel using the .config file from openSUSE. Thus, there must be something that needs tweaking in the standard Slackware .config file.
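We have not isolated the exact option yet; as a starting point, this is how we compared the NUMA- and scheduler-related settings between the two configs (the option names below are the ones we suspect, not a confirmed diagnosis, and the config paths are illustrative):

Code:
# compare the suspect options between the stock Slackware config and the openSUSE one
grep -E 'CONFIG_NUMA|CONFIG_SCHED_MC|CONFIG_SCHED_SMT' /path/to/slackware-config
grep -E 'CONFIG_NUMA|CONFIG_SCHED_MC|CONFIG_SCHED_SMT' /path/to/opensuse-config

# if the running kernel exposes its config, this works too
zcat /proc/config.gz | grep -E 'CONFIG_NUMA|CONFIG_SCHED_MC|CONFIG_SCHED_SMT'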
I have a Dell Vostro V13. I have installed LMMS, which runs OK if I put minimal demands on it, but if I start putting too many demands on it, like running more than three ZynAddSubFX plugins or multiple sample players, the CPU monitor goes into the red and it starts to sound bad and slow down a great deal. I know a small laptop is probably not the best choice to run a DAW on unless it has a good processor, but this is the only computer I currently have and I really like LMMS. Is there any way I can get more performance out of my computer? I looked a little into overclocking, but the Dell BIOS does not allow for it. Are there any tricks I can use to get LMMS to perform better given the limitations of my system?
I have an older PC running Ubuntu 10.04, and the system is slow, especially with Firefox running. I recently upgraded to 2 GB of memory, but it didn't help. I have plenty of space on my hard drive, using only 30 GB of 80 GB. My CPU is an Intel Celeron 2.40 GHz. I have broadband, and the speed seems good. What are the system requirements for Ubuntu 10.04? Any tips for speeding up performance, especially for Firefox? I used to defrag when I had Windows; is there any need to do that on Ubuntu as well?
I have recently upgraded to an M4A77TD motherboard and 8 GB of memory. I have an AMD Athlon(tm) II X4 645 processor, and I can't say that I see much improvement over the one that died. I have noticed that the HDD light is constantly flashing, so I wondered whether I could use some of the RAM to create a ramdrive to speed things up. So... my question: what should I put on my ramdrive? My own thought was the /var and /usr directories. I would load the ramdrive at startup (which would slow down my startup) and rsync it back again on shutdown, as sketched below. Has anyone done this? I've had a look through the forum but can't see anything, though my search terms may not be all that good.
SUSE Linux 11.1, 64-bit; var=0.3G, usr=5M, lib=0.2G, opt=0.6G, sys=0.6G. As far as I can see, swap is never used.
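The scheme I have in mind is roughly this (a rough sketch only; the mount point, tmpfs size, and the choice of /var are just to illustrate, and I realise bind-mounting over a live /var carries some risk):

Code:
# at startup: build a RAM-backed copy of /var and mount it over the real one
mkdir -p /mnt/ramvar
mount -t tmpfs -o size=512m tmpfs /mnt/ramvar
rsync -a /var/ /mnt/ramvar/
mount --bind /mnt/ramvar /var

# at shutdown: drop the bind mount and write the changes back to disk
umount /var
rsync -a --delete /mnt/ramvar/ /var/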
I'm new to openSUSE and my computer is quite slow, although the hardware isn't that bad. I opened ksysguard and it appears that my CPU is the bottleneck. My CPU usage is usually 100%, then after a few seconds it goes down to 20-60%, and then it goes back up to 100% after another few seconds. It says I have 141 processes running (I don't know if that's normal or not).
My specs are: CPU: AMD Duron(tm) 1.8 GHz; graphics card: NVIDIA GeForce 6200; memory: 2 GB RAM. I'm using KDE.
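To narrow down which of those 141 processes is eating the CPU, I can post the output of something like this (standard procps tools, nothing KDE-specific):

Code:
# header line plus the ten processes using the most CPU right now
ps aux --sort=-%cpu | head -n 11

# or watch it live; inside top, press Shift+P to sort by CPU usage
top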
I have recently migrated my file server over to an HP MicroServer. The server has two 1 TB disks in a software RAID-1 array, using mdadm. When I migrated, I simply moved the mirrored disks over from the old server (Ubuntu 9.10 server) to the new one (10.04.1 server). I have recently noticed that write speed to the RAID array is *VERY* slow, on the order of 1-2 MB/s (more info below). Obviously this is not optimal performance, to say the least. I have checked a few things: CPU utilisation is not abnormal (<5%), nor is memory/swap usage. When I took a disk out and rebuilt the array with only one disk (I tried both), performance was as expected (write speed >~70 MB/s). The read speed seems to be unaffected, however!
I'm tempted to think that there is something funny going on with the storage subsystem, as copying from the single disk to the array is slower than creating a file on the array from /dev/zero using dd. Either way, I can't try the array in another computer right now, so I thought I would ask whether people have seen anything like this! At the moment I'm not sure if it is something strange to do with having simply chucked the mirrored array into the new server (perhaps a different version of mdadm?), and I'm wondering if it's worth backing up and starting from scratch! Anyhow, this has really got me scratching my head, and it's a bit of a pain! Any help here would be awesome, e-cookies at the ready! Cheers
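For completeness, here is the sort of information I can gather and post (the device names are examples; my array members may be named differently):

Code:
# is the array healthy, or quietly resyncing in the background?
cat /proc/mdstat
mdadm --detail /dev/md0

# raw read speed of each member disk and of the array itself
hdparm -t /dev/sda /dev/sdb /dev/md0

# is the write cache enabled on the member disks?
hdparm -W /dev/sda /dev/sdb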
When I try to access any page, even small HTML pages, it sits for about 3 seconds in the "HTTP request sent; waiting for response." state, even when I use Lynx locally on the server, bypassing any possible network issues. The logs don't show a thing. The server itself is a high-end machine with nothing running on it apart from Apache, which is not serving any clients right now; the firewall is disabled and HostnameLookups is set to Off.
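To show where the 3 seconds actually go, I can capture a timing breakdown of a local request with curl (the URL is just an example page on this box):

Code:
# split the local request into name-lookup, connect, and first-byte times
curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s first-byte:%{time_starttransfer}s total:%{time_total}s\n' http://localhost/index.html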
Ubuntu 64-bit. The sound system works and plays noises correctly when I test the speakers in Sound Preferences. The BBC iPlayer (radio) in the browser plays sound correctly. Banshee and Rhythmbox try to play music files at double speed or more, with no sound output. The Spotify Linux version also tries to play back at double speed with no sound output. Media Player attempts to play music files at high speed, and it plays the video and audio tracks of videos at high speed too. VLC will play the video at normal speed but with no audio.
In the keyboard layout selection menu of Kubuntu 10.10 (live CD), if I try to select a fifth layout, a message appears saying that there is a limit of a maximum of four keyboard layouts. That is very little for people involved in or dealing with languages, or for computers shared by several people from different countries. Does anybody know anything about this limit and how to increase this number? In Windows you can select more than a dozen languages. Where is the problem? Is it a problem with KDE, with the Linux kernel, or something else? Another user said that this problem also exists in other versions, e.g. in Ubuntu.
I was looking for a way to boost the microphone audio (I tried in the Sound Preferences menu but it did not work), and therefore I installed the ALSA driver and ALSA mixer. Now Linux can't find either the input audio device or the output audio device. How can I boost the microphone audio? (Remember that I have only tried the Sound Preferences menu so far.)
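In case it matters, this is roughly what I have been trying from the command line (the control names "Capture" and "Mic Boost" differ from card to card, so these are only examples):

Code:
# list the mixer controls on the first card to find the capture-related ones
amixer -c 0 scontrols

# raise the capture level and make sure capture is switched on
amixer -c 0 sset Capture 80% cap

# or do it interactively; F4 switches alsamixer to the capture view
alsamixer -c 0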
I have the same problem: I'm trying to burn my images at 4x or 8x speed, but Ubuntu 10.04 says that the hardware does not support that kind of speed and switches up to 16x or more. I know the drive can burn at low speeds, at least in Windows. It is a bit strange that fast burning is OK and slow burning is not. What can I do to prevent this? I don't want to burn too many errors onto my discs.
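If the GUI keeps overriding the speed, I may try forcing it from the command line instead; a sketch of what I mean (device and image names are just examples) is:

Code:
# burn a CD image at 4x, bypassing the desktop burning tool
wodim -v dev=/dev/sr0 speed=4 image.iso

# the rough DVD equivalent using growisofs
growisofs -speed=4 -Z /dev/sr0=image.iso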
I'm surrounded by concrete block walls but have found an unlocked WiFi signal, and I need to do something to help boost the signal. My HP laptop has a built-in WiFi card, and I'm looking for something I could use or do to help boost the signal.
I'm trying to build Boost on a 64-bit CentOS 5.4 install. I have Python 2.6.4 built and installed at /opt/Python_2.6.4/, and I've appended an entry for it to the user-config.jam file:
The standard system Python is 2.4.1 but the tools I'm using require 2.6, so I've built this version and installed it independently of the system version 2.4.1 to avoid any conflicts.
As I'm sure you've already imagined, I get the error:
LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)
I see that this is a long-standing bug, but I have yet to find a fix. I've tried various CCFLAGS, CXXFLAGS, etc. to force a 64-bit compile or a 32-bit compile (-m64 or -m32).
The offending file is pyport.h - is there a 64-bit friendly version that I don't know about?
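For reference, the kind of user-config.jam entry and build invocation I mean look roughly like this (the paths reflect my install, and the address-model=64 property is something I am experimenting with, not a confirmed fix):

Code:
# user-config.jam entry pointing Boost.Build at the private Python 2.6.4 install:
#   using python : 2.6 : /opt/Python_2.6.4/bin/python2.6 : /opt/Python_2.6.4/include/python2.6 : /opt/Python_2.6.4/lib ;

# build Boost.Python with an explicit 64-bit address model
./bjam address-model=64 --with-python stage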
I've been playing around with this forever now. I am trying to build the example timer from Boost.Asio's tutorial, and I keep getting an undefined reference error for boost::system, plus pthread errors. After a bit of googling I went back to Boost's own documentation to follow their advice, but it didn't help one bit. I know the Boost system library is in /usr/lib and the include directories are in /usr/include; however, when I issue the command they say needs to be run, as in the gcc example, nothing happens, just "file not found", and I am inputting the correct path.
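The compile line I believe should work (the source file name is just what I saved the tutorial timer as) is:

Code:
# Boost.Asio's timer example needs Boost.System and pthreads at link time
g++ -o timer timer.cpp -lboost_system -lpthread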
I installed Boost 1.43.0 on Fedora 12 (32-bit), using the standard build-from-source procedure:
./bootstrap.sh
./bjam
./bjam install
I can see all the Boost libraries in /usr/local/lib. Now I build my own program, which depends on the Boost libraries, and it builds successfully. However, when I try to execute the program, it shows a "library not found" message:
Code: [alex@localhost ~]$ sixfpdconsole
sixfpdconsole: error while loading shared libraries: libboost_system.so.1.43.0: cannot open shared object file: No such file or directory
[alex@localhost ~]$ ldd /usr/local/bin/sixfpdconsole
    linux-gate.so.1 => (0x009d1000)
    libboost_system.so.1.43.0 => not found
    libboost_thread.so.1.43.0 => not found
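My guess is that the runtime linker simply does not search /usr/local/lib; a sketch of the fix I am assuming (run as root; the .conf file name is arbitrary) is:

Code:
# make the dynamic linker search /usr/local/lib permanently
echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf
ldconfig

# or just for the current shell session
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH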
My computer doesn't have a DVD drive, so I want to install openSUSE 11.3 from a USB stick. I wrote the live CD image to the USB stick and selected USB-HDD as the first boot device, but it fails to boot.
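The way I wrote the image may be the problem; the method I am assuming should work for a live image is to write it raw to the device (replace /dev/sdX with the USB stick's device, which this will wipe, and the ISO name with the actual file):

Code:
# write the live image directly to the USB stick, not as a file on its filesystem
dd if=openSUSE-11.3-LiveCD.iso of=/dev/sdX bs=4M
sync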
I have RHEL 5.2, with Boost 1.33 installed. I downloaded boost_1_44_0.tar.bz2 and built it. On completion it showed:
Code:
The Boost C++ Libraries were successfully built!
The following directory should be added to compiler include paths:
    /home/dfe/Archive/boost_1_44_0
The following directory should be added to linker library paths:
    /home/dfe/Archive/boost_1_44_0/stage/lib

I have been able to compile programs using:

CC -I/home/dfe/Archive/boost_1_44_0 -L/home/dfe/Archive/boost_1_44_0/stage/lib yourprogram.cpp
but...it's annoying that I have to add these paths for every program that I compile. Isn't there any way to make version 1.44 the default version (so that I won't have to include these paths when I compile)?
1. When I do "rpm -q boost", it shows boost-1.33.1-10.el5. Why is that so, when I've installed version 1.44? Did I have to remove the existing rpm before building the new version of boost?
2. Is there a better way to install the latest version of Boost?
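One approach I am considering, though I am not sure it is the recommended one, is installing the new version to a prefix the compiler and runtime linker already search:

Code:
# install Boost 1.44 under /usr/local so gcc finds it without extra -I/-L flags
./bjam --prefix=/usr/local install

# refresh the runtime linker cache so the new shared libraries are found
ldconfig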
I am configuring the VMware-open-source-view-client, which runs properly on the ARM architecture, for sh4-linux (a platform used at ST), using the command: configure --host=sh4-linux CC=sh4-linux-gcc CXX=sh4-linux-g++. The configure log shows:
configure:10578: result: no
configure:10518: checking for exit in -lboost_signals-mt
configure:10578: result: no
configure:10518: checking for exit in -lboost_signals-mt
configure:10578: result: no
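My assumption is that the boost_signals library has simply not been built for the sh4 target, so the configure check cannot link against it; the cross-build I am attempting (the toolset naming follows Boost.Build conventions, and the library list is a guess) looks like:

Code:
# user-config.jam entry declaring the sh4 cross compiler as a gcc variant:
#   using gcc : sh4 : sh4-linux-g++ ;

# cross-build and stage the libraries that configure is looking for
./bjam toolset=gcc-sh4 --with-signals --with-system stage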
As governments around the world amass armies of hackers to protect their countries' computer networks and possibly attack others, the idea of getting officials together to discuss shared threats such as cybercrime is challenging.
"You just don't pick up the phone and call your counterparts in these countries," said retired Lt. Gen. Harry Raduege Jr., former head of the federal agency responsible for securing the military's and the president's communications technologies. "They're always guarded in those areas, and they're always wondering if there's some other motive" behind the outreach.
So the idea behind an international security conference in Dallas this week is to get government officials, industry executives and others talking, informally, about where they might find common ground.