Server :: Lighttpd: Backend Is Overloaded + Multiple Php-cgi Processes In D State?
Sep 24, 2010
I've had this problem for a few weeks and I cannot figure it out; I'm pulling my hair out. I have a server with PHP, lighttpd, and redis installed. Sometimes I get the following messages in lighty's error log:

Code:
2010-09-24 13:57:33: (mod_fastcgi.c.3011) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 567
2010-09-24 13:57:33: (mod_fastcgi.c.3011) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 626
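Not part of the original post, but a quick way to check the other symptom from the title: list which php-cgi processes are stuck in D (uninterruptible sleep) state and the kernel wait channel they are blocked in:

Code:
ps -eo pid,stat,wchan:30,cmd | grep php-cgi
# STAT "D" means uninterruptible sleep, almost always blocked on disk or NFS I/O;
# the WCHAN column hints at the kernel function the process is waiting in.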
I'm trying to configure lighttpd to send SCGI requests to different ports, depending on which file(s) are accessed. Is this possible? This is what I've tried, and it hasn't worked.
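For comparison (the poster's own config wasn't included), here is a minimal sketch of per-URL SCGI routing in lighttpd 1.4, assuming mod_scgi is loaded and two backends on ports 4000 and 4001 — all of these names and ports are hypothetical:

Code:
server.modules += ( "mod_scgi" )

$HTTP["url"] =~ "^/app1/" {
    scgi.server = ( "/app1/" =>
        (( "host" => "127.0.0.1", "port" => 4000, "check-local" => "disable" ))
    )
}
$HTTP["url"] =~ "^/app2/" {
    scgi.server = ( "/app2/" =>
        (( "host" => "127.0.0.1", "port" => 4001, "check-local" => "disable" ))
    )
}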
I've just installed the Netbook Edition of Lucid on my MacBook, and it works great, apart from two issues. First, each time I start, the computer is extraordinarily laggy (pretty much unusable) until I run System Testing for video and go through the whole process; then it seems to work great. Second, I'm having a 100% CPU issue, which switches between my two CPUs. Nothing under "my processes" showed any kind of heavy use, but under "all processes" there was something called Backend. I killed the process and now everything seems to be working just fine.
I have a license server running on my server. Right now I would like to write a small status script to check whether the software is running. My software includes 3 daemons:
1) daemonA
2) daemonB
3) daemonC
My script should check whether each of these daemons is running. If all daemons are running, the script should print the short output "License server is running"; if one of them is not running, the output should be "License server is not running". Is it possible to write a small loop to check this? Say, the loop takes the next daemon name from a pool of daemons and checks whether it is running. Sometimes I need to check more than three daemons for one program, and I don't know how to write a good script for this. Maybe somebody could help me with a loop that I could also reuse in the future for daemonD, daemonE, daemonF, etc.: if all daemons from the pool are running, then "Software is running". A sketch of such a loop is below.
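A minimal sketch, assuming the daemons can be matched by exact process name with pgrep; the daemon names are the placeholders from the post:

Code:
#!/bin/sh
# Daemon names to check; extend the pool with daemonD, daemonE, ... as needed.
DAEMONS="daemonA daemonB daemonC"

ok=1
for d in $DAEMONS; do
    # pgrep -x exits non-zero when no process has exactly this name.
    if ! pgrep -x "$d" >/dev/null 2>&1; then
        ok=0
        echo "$d is not running" >&2
    fi
done

if [ $ok -eq 1 ]; then
    echo "License server is running"
else
    echo "License server is not running"
fi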
My server is really slow. When I run top -c or ps aux, the output below shows up. Shouldn't there be only one? Shall I kill all those processes and leave only one?
My problem is that Firefox (3.6.12 and 4.0 beta 7) freezes for seconds or minutes and enters the disk-sleep state (the usb-storage process does too). My system is an EeePC 701 running openSUSE 11.3 x86, with the /home and swap partitions on an SD card.
I don't know what the procedure is to file bugs against Slackware, so I will post here. The rc.lighttpd I have works, but the function that checks whether lighttpd is running errors out when there is no lighttpd.
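A guarded check along these lines avoids that failure; this is a hedged sketch, assuming the stock pid-file location (the function name and path should be adjusted to match the real rc.lighttpd):

Code:
# Return 0 only if the pid file exists and the process is alive.
lighttpd_running() {
    [ -r /var/run/lighttpd.pid ] || return 1
    kill -0 "$(cat /var/run/lighttpd.pid)" 2>/dev/null
}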
I'm running a VPS with Debian Lenny, with the Apache2 web server, memcached, Postgres, and the Django framework under mod_wsgi. Some of the pages served by Django take long to generate (approx. 15 s), but memcached compensates for this.
My problem is that when a robot visits the site, it starts traversing the site, visiting all the pages, including the not-yet-generated ones, and slows it down to the point where it stops responding.
What I'm looking for is a way to identify that a request comes from a robot (user-agent, IPs, etc.) and limit the resources it gets, so that, e.g., only one thread serves the robot.
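Not a server-side throttle, but the cheapest first step is asking crawlers to slow down via robots.txt; Crawl-delay is honored by Bing and Yandex, though Googlebot ignores it (Google's crawl rate is set in its Webmaster Tools instead):

Code:
# /robots.txt at the site root
User-agent: *
Crawl-delay: 10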
I'd like to gauge when the server is getting overloaded with users. At present I run the server mainly as a proxy server with about 100 users. The connection at the data centre is 100 Mbps, and total bandwidth used last month was 17431.16 MB. I would like to add a VPN in the future, but I feel this might overload the bandwidth: instead of just web traffic it will carry the clients' entire TCP connections. I would like to monitor this before it gets to the stage where users are complaining, but I'm not sure how to gauge whether the proxy is being overloaded. It is used mainly for video traffic.
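A few stock tools can put numbers on link utilisation before users start complaining; a sketch, assuming eth0 is the public interface and the vnstat, iftop, and sysstat packages are installed:

Code:
vnstat -l -i eth0   # live rx/tx rate on the interface
iftop -i eth0       # per-connection bandwidth, good for spotting heavy users
sar -n DEV 5        # interface counters every 5 seconds (from sysstat)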
I want to install GTK+. I see there are also numerous dependencies, which I've been slowly tackling, and the Cairo package has been particularly difficult. It claims the following upon ./configure --prefix=/usr:

Code:
configure: WARNING: Could not find libpng in the pkg-config search path
checking whether cairo's PNG backend could be enabled... no
configure: error: requested PNG backend could not be enabled

I've done some searching and found that libpng.pc is in my /usr/lib/pkgconfig/ directory and that the following commands don't do the trick:
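The usual first check (it may be exactly what was already tried): make sure pkg-config actually searches the directory that holds libpng.pc, and confirm it resolves before re-running configure:

Code:
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --modversion libpng   # should print the libpng version
./configure --prefix=/usr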
I'd like to know what it is that's overloading my web site. I'm running LAMP on Fedora 12, on an AMD64 processor. My web site is relatively low volume; a good day is over 250 visitors, and most days it's below 200. I can't see anything there that would overload even a small box like mine.

Several times a day, perhaps 5 or 6 (it's hard to say because I'm not always there), I get flooded with requests. I run "top" in the background all the time to watch it, and what I see is the load average going through the roof (I've seen the 1-minute figure go over 50 in about 3 minutes) while the CPU numbers stay reasonably low, which I interpret as the system being I/O bound. The "top" display will show at least 40 or 50 httpd sessions in flight, with PIDs spanning around 150 numbers slightly out of sequence, suggesting that the requests hit in close proximity but not precisely at the same time. These episodes can last up to 40 minutes before the system clears and the load average goes back down to something sane, although I've had instances where a flurry of activity lasts maybe 5 minutes and the load average goes no higher than about 20. The httpd log does not show any particular pattern of clients hitting my server. The requests appear to be for historical pages from the blog (e.g. I notice requests for images from older pages, not the current front page).

My best guess is that what I'm watching is Google or some other web-caching service scanning my site for caching purposes. But I don't know. Maybe I pissed off some aggressive hacker (it's a political site) and he/she/it has figured out a way to periodically cause me grief from masked sites. I have two questions:
1) Does anybody recognize this pattern? Can you tell me what it is?
2) How can I streamline mysql and apache so these incidents don't cripple me for half an hour?
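Not from the post, but a quick way to test the crawler theory from the access log itself is to rank client addresses and requested URLs by hit count during an episode (log path is the Fedora default):

Code:
awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head
awk '{print $7}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head
# In the combined log format, field 1 is the client IP and field 7 the URL;
# crawler user-agents also show up in the quoted agent string at line's end.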
As an assignment I was writing a program that creates two processes using fork and passes messages between them using a message queue. It worked well until my friend tried to copy it using scp. Suddenly all hell broke loose and the processes ran without synchronisation, i.e., in technical terms, a process just won't wait when the message queue is empty; it keeps executing randomly. After a reboot everything worked fine, until I tried scp on my system again on purpose, and again the program went mad.
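For reference, this is the blocking behaviour the post describes losing; a minimal System V sketch (names are illustrative, not the assignment's code) in which msgrcv() sleeps while the queue is empty unless IPC_NOWAIT is passed:

Code:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf { long mtype; char mtext[128]; };

int main(void) {
    key_t key = ftok("/tmp", 'q');              /* hypothetical key */
    int qid = msgget(key, 0666 | IPC_CREAT);
    if (qid == -1) { perror("msgget"); return 1; }
    struct msgbuf m;
    /* Blocks until a message arrives; with IPC_NOWAIT it would instead
     * return -1/ENOMSG immediately on an empty queue. */
    if (msgrcv(qid, &m, sizeof m.mtext, 0, 0) == -1) {
        perror("msgrcv");
        return 1;
    }
    printf("got: %s\n", m.mtext);
    return 0;
}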
This is the second fresh install of 10.04 on the same machine with the same issue. After boot, as soon as a user logs into the desktop, the system monitor shows the CPU at 100 percent and a steady climb in RAM usage. Several processes are spawned continuously until all RAM is consumed, and it then moves on to eat swap.
Using top, the process count climbs to over a thousand total processes. Some investigation with top, ps, and digging into /proc shows a PPID of 1. If the machine is booted to a shell, top shows 120 processes and is stable. Some of the processes respawning repeatedly are the GNOME toolbar and nautilus; I wish I had been clear-headed enough to write the others down before I left work. I can certainly get a more complete list in the morning.
I have swapped out the RAM and the processor with no success. I have also tried apt-get purge ubuntu-desktop and then reinstalling with apt; this did not resolve it. As mentioned at the top of the post, this is the second install with these symptoms. The first install started showing the issue about 10 hours after first boot. On this second install, all was working fine for a couple of days before this started.
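Not from the post: two one-liners that make the respawn pattern easier to capture than scrolling through top:

Code:
ps -eo comm= | sort | uniq -c | sort -rn | head   # which command names dominate
ps -eo pid,ppid,comm | awk '$2 == 1'              # everything reparented to init (PPID 1)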
When I start my application, it creates a message queue and forks a process. The child process reads multicast packets from the network and writes them to the message queue. The parent process reads packets from the message queue and compares the source IP and sequence number (which is part of the payload) against the last 64K packets received, to see whether a packet is a duplicate. I am using the message queue as a buffer because I do not want the child process to drop any packets while the comparison with previously received packets runs. The message queue is large enough to contain 64K packets. To compare against the old packets I use an array of structures as a circular buffer. During a spike I may receive 100-120 packets per millisecond.
When I run my application, the parent process keeps up with the child process at first; I can see that with "ipcs -q". After about 30 seconds it can no longer keep up, and the size of the message queue keeps increasing until it is full. When I run "top" I can see that one CPU/core is a hundred percent busy while the other 7 cores are idle. It seems that both processes are running on the same core, and the child process gets an interrupt every time a packet arrives on the network, starving the parent process. I am running RHEL 5. The system has 24 GB of memory, and my application is the only application running on it. It is an HP G6 server.
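A cheap way to test the shared-core theory on RHEL 5 is to pin the two processes to different cores; the PIDs below are placeholders:

Code:
taskset -pc 2 <parent_pid>   # move the parent to core 2
taskset -pc 3 <child_pid>    # move the child to core 3

The same effect is available from inside the program via sched_setaffinity() after the fork(). If the interrupt load itself is the problem, the NIC's IRQ affinity (/proc/irq/<n>/smp_affinity) can likewise be steered away from the parent's core.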
I'm looking for a way in Perl to take a list of servers, ssh several commands to each one, and store the results. If I do this serially, sometimes one server will hang the whole script, and even when it doesn't, it still takes hours to complete.
I'm thinking what I need is a parent loop that spawns a separate process for each server, passes the server name to that child, and lets the child execute all the commands I have defined in its own process. If one server hangs, at least that won't stop the script from getting through all the other servers in the list.
I'm guessing fork() would serve me best; however, all the online descriptions I have found have been vague at best.
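A minimal sketch of that fork-per-server pattern, assuming passwordless ssh; the host names, command list, output files, and the 5-minute alarm are all placeholders:

Code:
#!/usr/bin/perl
use strict;
use warnings;

my @servers = qw(host1 host2 host3);   # hypothetical server list
my %host_of;

for my $server (@servers) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: if this server hangs, only this child dies at the alarm.
        alarm 300;
        open my $out, '>', "$server.out" or exit 1;
        print {$out} scalar qx(ssh -o BatchMode=yes $server 'uptime; df -h');
        exit 0;
    }
    $host_of{$pid} = $server;          # parent keeps looping immediately
}

# Reap every child; wait() returns -1 once none are left.
while ((my $pid = wait()) != -1) {
    print "$host_of{$pid} done (exit " . ($? >> 8) . ")\n";
}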
Whenever I monitor my CPUs, it seems only the first is ever utilized, with the second always at 0%. Does this mean it is not being used, or just not being reported as in use? Is there anything I could do to improve the situation if it is not being used as much as it could be? On Windows, I can assign processes to both cores, or to either one. Is there a way to do something similar in Linux?
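Linux has an equivalent of the Windows affinity setting in taskset; and in top, pressing 1 toggles a per-core view, so you can see whether the second core really is idle:

Code:
taskset -c 1 ./some_program   # start a program pinned to the second core (hypothetical name)
taskset -p 0x3 <pid>          # let an existing process use both cores (mask = core0|core1)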
Lighttpd anti-hotlinking for images: I just want these domains to be able to link to my images (test1.com, newtest2.cn, 800keke.net, 800org.com.cn); every other site should be redirected to [url].
lighttpd configuration:
Code:
This configuration only takes effect for test1.com; it has no effect for newtest3.cn, 800keke.net, or 800org.com.cn. I still don't know why.
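The posted config is missing above, so this is only a guess: a common cause is a referer regex whose alternation only covers the first domain. A minimal sketch with all four domains inside one group, assuming mod_redirect is loaded (the target image URL is a placeholder):

Code:
$HTTP["referer"] !~ "^($|https?://([^/]*\.)?(test1\.com|newtest2\.cn|800keke\.net|800org\.com\.cn))" {
    url.redirect = ( "\.(jpe?g|png|gif)$" => "http://example.com/denied.png" )
}

The leading ($|...) alternative lets requests with an empty referer through; without it, visitors who open an image URL directly get redirected too.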
I installed yumex with yum -y install yumex. When I start yumex, it comes up with this error: fatal error: backend-not-running: backend not running as expected (yumex will close). How can I solve it?
My computer is often very slow, to the point of stalling. I tty'd in, and when I ran ps -ef I noticed about 10 instances of /usr/sbin/apache2 -k start. I don't even want one Apache running. Any suggestions why these are running, or how to stop it? Well, I can stop them with a sudo killall, but how can I make sure it doesn't happen again?
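Killing the processes only lasts until the next boot; to keep Apache from starting automatically on a Debian/Ubuntu-style system:

Code:
sudo /etc/init.d/apache2 stop
sudo update-rc.d apache2 disable   # or remove the package: sudo apt-get remove apache2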
I have been searching in the forum and on Google but still haven't been lucky enough to figure this out yet. I have a lighttpd server running (because Apache consumes so much CPU and memory) and qmailtoaster (just set up). Here is the configuration in the CGI modules:
I have followed all the steps mentioned in the documentation, except automake (I am not clear on where automake comes in).
I logged in as root on an Ubuntu system and then:
Code:
cd /opt
svn checkout svn://svn.lighttpd.net/lighttpd/branches/lighttpd-1.4.x/
cd lighttpd-1.4.x
./autogen.sh
./configure
make
make install
After this, what do I need to do to be able to start lighttpd? I did not find any script at /etc/init.d/lighttpd, so what more has to be done?
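A source install does not drop an init script into /etc/init.d; once a config file exists, the daemon can be started by hand. The paths below are the default --prefix=/usr/local install location and a hypothetical config path:

Code:
/usr/local/sbin/lighttpd -t -f /etc/lighttpd/lighttpd.conf   # syntax-check the config
/usr/local/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf      # start the daemon
# adding -D keeps it in the foreground for debugging

For boot-time startup, the 1.4.x source tree ships sample init scripts under doc/ that can be copied to /etc/init.d and adapted.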
I ran across the above article, which described a DoS attack in which requests are sent very slowly to the web server. I'm running lighttpd 1.4.28 on a Gentoo Linux server, and I'm wondering if there is anything I could do in preparation to defend against such an attack.
A bug report [url] seems to indicate that a patch against this sort of attack was already in place, but I wanted to be sure it addresses the same thing and whether there is anything else I need to do.
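For what it's worth, the knobs lighttpd 1.4.x exposes against slow requests are its idle timeouts; values are in seconds, and the ones below are illustrative rather than recommendations:

Code:
server.max-read-idle       = 30   # drop clients that dribble a request in
server.max-write-idle      = 60   # drop clients that read the response too slowly
server.max-keep-alive-idle = 5    # shorten idle keep-alive connections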
This is what I did so far, and everything installed successfully:

Code:
yum update
wget [URL]
yum install lighttpd
chkconfig --levels 235 lighttpd on
/etc/init.d/lighttpd start

Must I configure something else too? If yes, what?
I know that you can access and run any script over the web with wget:
Code:
wget mydomain.com/page.php
But this literally accesses it externally through the web; I think it would be safer and faster to access the script internally. I am using lighttpd to host my PHP pages; is there a way to do that? From past hosting experience, the cron jobs on those hosts let you input:
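The usual internal route is the PHP command-line binary, which skips the web server entirely; a sketch assuming the PHP CLI package is installed and a hypothetical document-root path:

Code:
php /var/www/page.php                                # run the script directly
*/15 * * * * php /var/www/page.php >/dev/null 2>&1   # crontab entry: every 15 minutes

One caveat: a script that relies on $_GET, $_SERVER, or other web-request state may need small changes to run under the CLI.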
Is there a way of setting up a basic Apache web page that would allow users to view files and folders on a server and transfer them from one directory to another? I know it doesn't sound very secure, but this is an internal server, so security is not a big issue; it is pretty well secured in its network already. We have users that need to move files from one directory to another, and we'd like to give them an intuitive interface for this rather than having them log in to the Linux system and run mv/cp commands. They are not very Linux-savvy.
I wrote a program that multiplies 2 matrices using multiple threads, and another one using multiple processes and shared memory, both in C. I need to find the total memory usage of these programs. I know of the top command, but when my matrices are relatively small the programs don't even show up in top because they complete so fast. How can I find the memory usage in these cases? Also, how can I find the total turnaround time of my programs?
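GNU time (the /usr/bin/time binary, not the shell builtin) captures both numbers even for runs too short for top; the program name and argument below are placeholders:

Code:
/usr/bin/time -v ./matrix_threads 512
# "Maximum resident set size" is the peak memory footprint;
# "Elapsed (wall clock) time" is the total turnaround time.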
Error looking for next uid in sambaDomainName=sambaDomain,dc=DOMAINNAME: No such object at /usr/lib/perl5/vendor_perl/5.8.8/smbldap_tools.pm line 1194. Why does this appear? Is there some configuration missing?