Ubuntu Servers :: Gzip Slowing Down Apache - Force Lower Priority?
May 12, 2011
I'm trying to dump a MySQL database on a small web server without killing performance. I tried using the nice command to give mysqldump and gzip a low priority, but gzip is still taking up 100% CPU, and pages on the web server are loading incredibly slowly. Here's my command:
Code:
nice -n 19 mysqldump -u USER -pPASSWORD DATABASE | nice -n 19 gzip -9 > OUTFILE.sql.gz
How do I get gzip to run without taking up 100% CPU? I've attached a screenshot of top about 8 seconds into the dump.
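One thing worth noting is that nice only yields the CPU when something else wants it; what usually hurts page loads here is disk I/O contention plus gzip -9, the slowest compression level. A sketch, assuming util-linux's ionice is available: run both halves of the pipeline in the idle I/O class and use a lighter gzip level (a small generated file stands in for the real mysqldump output):

```shell
# Stand-in data; replace the cat with the real mysqldump invocation.
printf 'dump data\n' > /tmp/dump.sql

# Lowest CPU priority (nice 19) plus idle I/O class (ionice -c3) for
# both the dump and the compressor; -6 is gzip's default level and is
# far cheaper than -9 for only slightly larger output.
nice -n 19 ionice -c3 cat /tmp/dump.sql \
    | nice -n 19 ionice -c3 gzip -6 > /tmp/outfile.sql.gz

gzip -t /tmp/outfile.sql.gz && echo "archive OK"
```

With the idle I/O class, the dump only gets disk time when Apache isn't asking for any, which matters more than the CPU priority on a small server.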
I run a Linux box as a gateway behind a satellite modem. The internet link over the satellite modem is only 1 Mbit/s, so usage often reaches 100% when someone is downloading or uploading something. I see my ping return time jump from 700 ms to 6000 ms if someone tries to upload a file (by sending an attachment in an email, etc.). The satellite operator says this is normal, but I have my doubts.
Does ICMP have a lower priority? Should I really be seeing this behaviour? I understand that a TCP packet would just be queued until the previous acknowledgement had been received, and that a UDP packet would be dropped, but how does ICMP deal with these situations during heavy traffic?
I upgraded from 9.04 to 10.04.1, so I am still using legacy grub. Anyway, I noticed with the update that the console is using the framebuffer and VESA for high resolutions. I really don't like or want this feature, so I added vga=0 to get 80x25, and it works initially, but as soon as the X server is running (Xubuntu in my case) I can see the console switch to a high resolution again. After that, if I go to a console, say tty1, it is again using a high resolution instead of 80x25 (VGA). Is there a way to force the consoles to a lower resolution and keep it that way? It used to work fine in Ubuntu 8.xx and 9.xx.
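If kernel mode setting (KMS) is what re-raises the resolution once X starts — an assumption, but a common one on 10.04 with Intel graphics — then vga=0 alone cannot win; the kernel line in legacy grub's menu.lst would also need mode setting disabled, for example:

```
# /boot/grub/menu.lst -- append nomodeset (or i915.modeset=0 on Intel)
# to the existing kernel line alongside vga=0:
kernel /boot/vmlinuz-<version> root=<your-root> ro quiet vga=0 nomodeset
```

Note that nomodeset can break or degrade X acceleration, so this is a trade-off to test rather than a guaranteed fix.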
It is now almost 3 o'clock in the night here; I've spent the last 5 hours trying to fix this problem. CentOS always runs perfectly here. I am not a genius with the terminal, so I wanted to install GNOME on my VPS, then VNC, but it didn't work and I did a restart.
Now I get this:
Quote:
[root@srv1 conf]# apachectl start
Syntax error on line 41 of /etc/httpd/conf/httpd.conf: Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
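The 'Order' directive is provided by mod_authz_host, so one likely explanation (a guess from the error text, not a certain diagnosis) is that the corresponding LoadModule line was lost while editing httpd.conf. On a stock CentOS layout it looks like this and must appear before any Order/Allow/Deny directives:

```apache
# /etc/httpd/conf/httpd.conf
LoadModule authz_host_module modules/mod_authz_host.so
```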
I have limited access to several servers (key-based auth), but the cron facility is not available to me. Those servers are getting filled up by large Apache logs, and I have to log in to each node manually and clean them every single day.
I tried to write a script to run from the login box, but when I try it, it looks for the logs on the local server (the login box).
So the current situation is:
How can I modify this so that the script on server1 will look for files on that server and zip them?
Google showed another command called rsh, but in my environment it is not available either.
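With key-based ssh access but no cron or rsh, the usual pattern is to send the cleanup command to each host over ssh, so it executes remotely rather than on the login box. A dry-run sketch — the host list and log path are placeholders, and the ssh invocations are echoed rather than run so the loop structure can be checked safely:

```shell
#!/bin/sh
# Placeholders -- substitute the real hosts and Apache log directory.
HOSTS="server1 server2 server3"
CMD='find /var/log/apache2 -name "*.log.1" -exec gzip {} +'

for h in $HOSTS; do
    # Quoting keeps $CMD intact so it executes on the remote side,
    # not locally.  Drop the echo to actually run it.
    echo ssh "$h" "$CMD"
done
```

The key point is the quoting: an unquoted command is expanded by the local shell before ssh ever sees it, which is exactly the "it looks for logs on the login box" symptom.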
So this is really the result of another problem. There seems to be an issue where the CPU spikes to 99% forever (until reboot) if I run apt-get, Synaptic, or Update Manager while an external USB drive is plugged in. Note that other USB peripherals are no problem, just an external HD.
So my workaround was to eject the drive when doing apt-get or other installation work, then reattach it to remount. Now, on to the present problem. I'm using the basic backup script (the rotating one) found in the Ubuntu Server manual. It uses tar and gzip to store a compressed version of the desired directories on my external USB drive (which sits in a fireproof safe; this is for a business).
However, it seems tar and gzip, which run nightly six days a week via cron as root, don't ever want to die, and they don't release the drive. I have to reboot the system (I can't log off) to release the drive and unplug it; then I can do update/install work.
Of course, if apt etc. worked fine without conflicts with the external device, I wouldn't care about the tar/gzip problem, other than that it generally isn't proper behaviour and it chews up some CPU cycles (they run at about 0.6 and 1.7 percent respectively). I also can't kill them via kill or killall. They seem undead.
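Processes that ignore kill and killall are typically in uninterruptible sleep (state D), blocked inside the kernel on the wedged USB I/O path; they cannot be signalled, even with KILL, until the I/O completes or the device comes back. That state is easy to check with ps:

```shell
# List any processes currently in uninterruptible sleep (STAT "D");
# tar/gzip appearing here would confirm they are stuck on device I/O,
# not merely misbehaving.
ps axo pid,stat,comm | awk '$2 ~ /^D/ {print}'

# The state field can also be read for a single process,
# e.g. the current shell (S/R = normal, D = uninterruptible):
ps -o stat= -p $$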
I have an interesting issue with Ubuntu Server 8.04. The server has been running for quite a while (not designed or put together by me), but recently it has started segfaulting and now will not boot except in read-only mode. I see the following errors in dmesg.
I have BT4 as an ISO image and start it by booting from CD. When I try the command root@bt~# startx it comes up with this fatal error. What can I do to get this to work? And of course, I'm new at this.
I needed Ubuntu Server and recklessly picked Karmic. The hardware is a regular LGA 775 motherboard with integrated Intel graphics; the monitor is an ASUS VH222D. Installation went smoothly, but after that problems occurred. The shell is displayed at 1920x1080 and the fonts are so small they're almost unreadable. Grub2 looks OK (standard, non-framebuffer), and so do the few rows of text after grub loads, but soon after that the framebuffer becomes active.
dpkg-reconfigure console-setup doesn't mention resolution. Some articles point to grub2's gfxmode, but none of the manuals helped. I just cannot change the grub2 menu resolution to anything other than the standard console fonts (non-framebuffer), and the kernel option vga=XXX no longer works. How do I lower the shell resolution? And why is this automatic?
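With grub2 the vga= parameter was indeed retired; the replacement knob (assuming a grub2 build recent enough to honour it) is GRUB_GFXPAYLOAD_LINUX, which controls the mode handed to the kernel and can keep the console in plain text mode:

```
# /etc/default/grub -- then regenerate the config:
#   sudo update-grub
GRUB_GFXPAYLOAD_LINUX=text
```

If a framebuffer driver with kernel mode setting takes over later in boot anyway, that is a separate mechanism from grub and has to be disabled on the kernel command line instead.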
Using Ubuntu Server 10.04.2 64-bit, all up to date.
I am running multi-threaded processes. These use OpenMP in my own code and the multi-threaded ACML maths library. When run in the foreground, everything is fine i.e. if I have set
export OMP_NUM_THREADS=8
then when I start, all 8 cores are in use and things whizz along. However, when running overnight, logged out, using e.g. 'at now + 1 minute' followed by the command, I only get about 130% CPU and it slows down accordingly. I have tried renicing and calling from within a bash script in case sh is doing something odd, but nothing solves it. I am sure that in the recent past this wasn't the case.
The libraries being used are shared versions in case that might have any bearing.
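One thing worth ruling out is a restricted CPU affinity mask inherited by the at job: a mask covering only one or two cores would cap an 8-thread OpenMP run at roughly the observed 130% regardless of OMP_NUM_THREADS. The mask and the machine's core count can be compared like this (taskset is part of util-linux; this is a diagnostic sketch, not a fix):

```shell
# CPUs this process is allowed to run on vs. CPUs present.  Put the
# same two commands inside the at job itself to see what the batch
# environment actually inherits.
command -v taskset >/dev/null && taskset -pc $$
nproc
```

It is also worth checking whether the job was submitted with `batch` rather than `at`: batch deliberately defers work until the load average drops, which interacts badly with a job that is itself the load.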
I am playing with my LAMP server.
1. Why can I access a PHP file on the server only by typing http://serverip/file with no .php extension?
2. Later I tried to play with .htaccess, but when I uploaded it to the server it just disappeared. Why is that?
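On the second question, a file whose name begins with a dot is simply hidden from default directory listings on Unix, so an uploaded .htaccess often looks as if it vanished while still being on disk (many FTP clients also hide dotfiles by default). A quick demonstration:

```shell
# A dotfile disappears from a plain listing but not from the disk.
dir=$(mktemp -d)
touch "$dir/.htaccess"
ls "$dir"            # prints nothing
ls -A "$dir"         # prints .htaccess
```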
I tried to add my wife, and when I put in a password for her, this error comes up: "Please set a valid user name consisting of a lower case letter followed by lower case letters and numbers." I did all that and I still can't set a password for her.
I have Ubuntu 10.04 x86-64 (I use PuTTY to connect) with the following installed: screen, MySQL server and client, my Java program, OpenJDK 64-bit, and Apache2 (the web server stuff). It's a VPS machine: Xeon 2.0 GHz, 64-bit, 4 GB RAM.
How can I make (or force) my Java program to use more than one core? I would like it to use just 5 of the 6 that I have. I use a .sh to run it; this is the code for it.
Code:
#!/bin/bash
cd "${0%/*}"
java -Xshare:auto -Xmx2662M -jar craft.jar
I have set the kernel selection timeout to 3 seconds. However, sometimes it just hangs indefinitely on the kernel selection screen until I manually select the kernel. This is bad news, since it is a VM started up from a script and _has_ to start automatically. The VM is a reconstructed cloud backup made by rsync and re-running grub.
Is there any way to force Ubuntu to always time out on kernel selection, and NEVER, under any circumstances, wait indefinitely?
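Ubuntu's grub2 deliberately ignores the timeout after a boot it considers unclean (the "recordfail" flag), which fits the "sometimes it hangs" symptom of a VM rebuilt by rsync. On grub packages that support it — an assumption worth verifying for your version — the wait can be capped in /etc/default/grub; on older ones the hard-coded -1 timeout in /etc/grub.d/00_header has to be edited instead:

```
# /etc/default/grub -- cap the "recordfail" wait at 3 seconds,
# then run: sudo update-grub
GRUB_RECORDFAIL_TIMEOUT=3
```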
I have an 8.04 LTS server in which I have installed a new 1 TB drive. The server is running great, but I am a bit confused about the ln -s command and drive mounting. I have BackupPC installed on the server and I am running out of storage space. To resolve this I moved the cpool and pool directories to the 1 TB drive and, from within the /var/lib/backuppc directory, typed ln -s cpool /store/1TB/cpool. This created the symbolic link to the new drive and everything works fine. I then rebooted the server and everything is still running fine, but the drive does not show up in df -h output; the directories, however, appear to be mounted fine.
I thought the drive would not be mounted automatically until it was defined in fstab. Does the ln -s command force the system to automatically mount the directories but not the volume? This behaviour caused me to delete my backup data, because I was sure the disk was not mounted, but it was!
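A symlink records nothing but a path: ln -s neither mounts anything nor checks that its target is a mount point, so if /store/1TB was not in fstab, anything written through the link most likely landed in a plain directory on the root filesystem. Note also that the argument order is ln -s TARGET LINKNAME, so the command as quoted creates the link at /store/1TB/cpool, not in /var/lib/backuppc. A quick demonstration using temp paths for safety:

```shell
# ln -s TARGET LINKNAME: the link stores only the path string.
mkdir -p /tmp/store
ln -sfn /tmp/store /tmp/cpool
readlink /tmp/cpool        # prints /tmp/store

# Whether a directory is really a separately mounted filesystem is a
# different question, answered by mountpoint (util-linux), not by ls:
mountpoint -q /tmp/store && echo "mounted" || echo "not a mount point"
```

df -h only lists mounted filesystems, so its output (not the presence of directories) is the reliable indicator here.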
I'm a bit lost with the PHP/Sendmail configuration, maybe somebody could help me getting back on the right track. Following situation:
Postfix:
* accepts SMTP on port 25, but only for its own domains; some policy and spam checks are made through amavisd.
* accepts submission on ports 587 and 465 from authenticated users only; quotas and spam checks prevent outgoing spam.
So I'm enforcing a very strong outgoing spam policy, but users are still able to use the PHP mail() function to send spam through the /usr/sbin/sendmail command. My users have access to their own php.ini, so my idea was to somehow enforce delivery through the local Postfix on port 587 or 465 and just let them enter their user/pass in their php.ini. (I suppose there might be a cleaner solution.)
Unfortunately, my settings like smtp_host, port, user, etc. are ignored while the sendmail_path line is active. But if I comment that line out, PHP just uses the default, which is the same as what's configured in sendmail_path — so it's in effect whether I use the line or not (and setting it to an invalid command breaks the mail() function completely).
How can I enforce my anti-spam policy on the PHP mail() command? For my SSH users I just blocked outgoing connections to localhost on port 25, which seems to work so far, but the postfix sendmail wrapper just ignores this.
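Two facts explain the observed behaviour: PHP's SMTP/smtp_port ini settings only affect mail() on Windows (on Linux mail() always shells out to sendmail_path), and Postfix's sendmail wrapper injects into the queue directly rather than connecting to port 25, which is why the firewall rule misses it. One workable route — an assumption, not the only fix — is to point each user's sendmail_path at an SMTP submission client such as msmtp, so their mail() traffic has to authenticate on 587 like everything else. A sketch; the config is written to a temp file purely for illustration and normally lives in ~/.msmtprc:

```shell
# msmtp configuration -- account details are placeholders.
RC=$(mktemp)
cat > "$RC" <<'EOF'
account default
host localhost
port 587
auth on
user USER
password PASSWORD
tls on
EOF
chmod 600 "$RC"     # msmtp refuses world-readable config files
head -n 1 "$RC"

# Then, in each user's php.ini:
#   sendmail_path = "/usr/bin/msmtp -t"
```

Since the users control their own php.ini, enforcing this likely also means removing direct execute access to /usr/sbin/sendmail for their accounts.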
I have Webmin installed on an Ubuntu server. I currently have a working Apache server on port 80, but I want to create a virtual host on port 81. I go to Servers -> Apache Webserver -> Create Virtual Host, change the port to 81 and the document root to /var/port81www, then click Create. However, when I go to 192.168.1.5:81 (the local IP; I know I'll have to port forward, but it's not even working locally), it does not work.
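One thing the virtual-host wizard does not necessarily add is the Listen directive, and Apache never binds a port that isn't listed — a `<VirtualHost *:81>` block alone is not enough. On Ubuntu these live in /etc/apache2/ports.conf, so that is worth checking before looking at forwarding:

```apache
# /etc/apache2/ports.conf -- reload Apache after adding the new port
Listen 80
Listen 81
```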
I have an SSH server set up at home listening on port 22. I have hardened the server so it is pretty secure, but I want to make it even safer by editing my iptables rules to rate-limit incoming connections and DROP failed login attempts. I have tried these tutorials but I just can't get it to work: [URL]
I want the debian-administration.org tutorial to work, but when I try to add the first rule in a terminal:
sudo iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
I get the following:
Bad argument `--set'
I am new to iptables and I'm not sure if I'm doing something wrong when I try to set it up. I'm using Ubuntu 10.04.1 LTS with iptables v1.4.4.
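"Bad argument `--set'" on an otherwise valid rule is very often a copy-paste artifact: web pages silently render the ASCII "--" as a single en-dash, which iptables then rejects. Retyping the options by hand usually fixes it. For reference, a plain-ASCII version of the tutorial's two recent-match rules, written to a file for review (each line can then be applied with sudo iptables, or via iptables-restore):

```shell
# The classic ssh rate limit: mark new connections, then drop a source
# that opens 4 or more within 60 seconds.
cat > /tmp/ssh-ratelimit.rules <<'EOF'
-I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
-I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP
EOF
grep -c '^-I INPUT' /tmp/ssh-ratelimit.rules   # 2 rules written

# Apply (requires root):
#   while read -r r; do sudo iptables $r; done < /tmp/ssh-ratelimit.rules
```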
I recently got Karoo broadband installed. The speed is normally 8 Mbit/s, but the longer I am connected to my router the slower it gets; I have to disconnect and reconnect to the router to get back to full speed. Is this due to a bad wireless driver, do I need new drivers, or do I need a better router?
I have done some research on Google to find out how to slow down the playback speed of a movie, and I found this: [URL]. But I'm not sure how to do it!
I have got my Fedora 12 setup working just beautifully... except the mouse!
A few months ago I invested in a very nice, expensive, high-performance mouse with very high-resolution tracking. In Fedora 12 it is just too fast for me, though, and I'm already well on the way to carpal tunnel syndrome.
I have turned the sensitivity and acceleration down as far as they go, but it is still too fast. The old ways of adjusting xorg parameters are, as I understand it, now null and void due to the hotplugging HAL framework.
How do I turn the mouse speed down on Fedora if the control panel doesn't turn it down enough?
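In the HAL-hotplugging era, per-device pointer settings moved to xinput properties; raising "Device Accel Constant Deceleration" slows the pointer beyond what the control panel allows. The device name below is an example — the real one comes from `xinput list` — and the block applies the setting only when an X session is actually present:

```shell
# Needs a running X session; the device name is illustrative only.
if [ -n "${DISPLAY:-}" ] && command -v xinput >/dev/null 2>&1; then
    # Deceleration factor 2 halves the effective pointer speed.
    xinput set-prop "USB Optical Mouse" \
        "Device Accel Constant Deceleration" 2
    status="applied"
else
    status="skipped (no X session)"
fi
echo "$status"
```

The change lasts until logout; putting the command in a session-startup script makes it stick.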
This little Atom-based unit works fine with Linux, but its fan runs at full speed all the time. So I figured maybe lm_sensors could adjust its speed dynamically, but "sensors", "fancontrol", and "pwmconfig" all fail to work, even though the modules required for this motherboard are up and running:
About every second time I start up my Ubuntu 10.04, my hard drive starts working heavily. While it's doing so, I see in the System Monitor that gnome-settings-daemon has the status 'uninterruptible'. After about 3 minutes the hard drive calms down and the status of gnome-settings-daemon switches to 'sleeping', so I guess the heavy disk activity is somehow related to that process. Sometimes gnome-settings-daemon seems to crash completely, so that my theme is gone.
So is there a solution to this? It's annoying that it slows down the system for about 3 minutes after startup. On the net I've found some old threads from 2004 and 2006 which describe gnome-settings-daemon crashing, but there didn't seem to be a solution.
As mentioned in another of my recent posts, I have set up a small network for my job, in which I am using 4 servers running Ubuntu 9.04 Jaunty Jackalope with the GNOME GUI on top of Ubuntu Server (yes, I need a GUI for what I am doing). However, I am having a problem when bringing Ubuntu out of the "locked" state, and any time I am required to authenticate (e.g. sudo bash). After I type in my password, everything hangs for a good 3+ minutes.
To test this, I disconnected the Ubuntu servers from the network and tried to authenticate again. With the server off the network, authentication did not hang and the GUI came up in seconds. I also checked my syslog and memory utilization; both are well within acceptable ranges and would not be causing sluggish performance.
With all of this said, I believe there is some kind of name resolution going on when trying to authenticate while the servers are connected to the network (kind of like how a traceroute in a Windows cmd prompt without "-d" takes forever because of name resolution). I should also mention that my network does NOT have an internet connection, by design.
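A multi-minute hang that disappears off-network does fit name-resolution timeouts: with no internet connection, any DNS server listed in /etc/resolv.conf never answers, and PAM-related lookups wait out the full timeout. A first check is whether each box can resolve its own hostname locally:

```shell
# If this prints nothing, sudo/PAM may be stalling on dead DNS lookups.
# The usual fix is a local /etc/hosts entry for the hostname, e.g.:
#   127.0.1.1   myserver
getent hosts "$(hostname)" || echo "hostname does not resolve locally"
```

Removing unreachable nameserver lines from /etc/resolv.conf on an intentionally offline LAN is worth testing for the same reason.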
When calling 'top', dbus shows a load of activity, and 'dbus-monitor --session' also outputs a lot of action (see attachment). The effect is that my machine slows down and normal work takes ages. I have no clue where the activity is coming from. I also had a look at https://bugs.launchpad.net/ubuntu/+s...or/+bug/441828, but there are no abnormal devices plugged in (just a keyboard and a cordless mouse). When starting the machine the CPU load is ~9%; after a while (about a day) it goes up to 70-80%. The running applications do not change during that time, so I guess there is a leak somewhere.
I'm moving to Vietnam soon and was wondering if anyone knows the best native Linux way to circumvent the internet censorship there without slowing things down too significantly. I used to use proxy servers to access Wikipedia in China, but they were slow and unreliable — not to mention a pain to enable and disable.
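One common native approach is an ssh dynamic SOCKS tunnel to any server outside the filtered network: it is encrypted end to end, needs no extra client software, and is easy to start and stop. The hostname below is a placeholder, and the command is echoed rather than executed so the sketch is safe to run as-is; the browser would then be pointed at localhost:1080 as a SOCKS5 proxy:

```shell
# -D 1080: local SOCKS5 proxy on port 1080
# -C:      compress traffic (helps on slow links)
# -N:      open no remote shell, just the tunnel
# Drop the echo to open the tunnel for real.
echo ssh -D 1080 -C -N user@server.example.com
```

Whether this is faster than HTTP proxies depends mostly on the latency to the remote server, so picking one geographically close helps.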
If there are users on a network who run desktop Linux (any variety), is there a way to configure their computers to "require" them to save documents to the network? For example, redirecting their /home folders to a network file server, or not allowing them to save files to their local hard drives?