Server :: Tuning Apache To Handle PHP With NginX On The Frontend
Jun 24, 2010
My intention is to have nginx on the frontend handling all static files, and Apache + mod_php in charge of handling PHP requests. I have optimized nginx to be as efficient as possible at serving static files, but have very little experience with Apache. Since Apache would only be processing PHP requests, would the standard Apache optimization guides suffice, or would it be best to configure it differently?
PS: this is a dedicated file server; the database is hosted separately.
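For reference, a minimal sketch of the kind of split being described, assuming nginx listens on port 80 and Apache + mod_php listens on 127.0.0.1:8080 (names, ports and paths are placeholders): nginx serves static files itself and proxies anything ending in .php to Apache.

Code:
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    # static files are served by nginx directly
    location / {
        try_files $uri $uri/ =404;
        expires 30d;
    }

    # PHP requests are handed off to Apache + mod_php on port 8080
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Because Apache never sees static requests in this layout, the usual advice is to keep the prefork pool small (size MaxClients to mod_php's per-process memory) and turn KeepAlive off on the Apache side, since nginx is the one holding client connections open.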
I'm running a Linux cloud server with the following configuration: 1.2 GHz processor allocation, 752 MB RAM.
The site loads slowly and clicking a link almost freezes the page for a second; the page loads could be much faster. We've been running mysqltuner and have pretty much optimized all the slow queries. Is there anything we can do to fine-tune the server so it is faster and more responsive?
I've been trying to set up/tweak these parameters in Apache to values suitable for the amount of memory on the server. When I look around, some people say: just look at the memory used per process, divide the available memory by that, and you get the number of processes Apache can handle at once before it starts swapping.
Well, I'd done this calculation and for me it came out to roughly 200 concurrent connections. The funny thing is, our MySQL server had a slowdown, so the Apache servers were running at roughly 450 concurrent connections and weren't swapping (they still had 600 MB free, not counting what was available in buffers/cache per 'free -m'). So if I had set the limit to 200, people would have been unable to reach the site, and I'm kind of pleased I happened not to have had time to set it yet.
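A rough version of that arithmetic as a shell sketch (the process name may be httpd or apache2 depending on the distribution; the figures in the comments are purely illustrative):

Code:
# average resident memory per Apache worker, in MB
ps -o rss= -C apache2 | awk '{ sum += $1; n++ } END { if (n) printf "%.1f MB avg\n", sum/n/1024 }'

# rough starting point for MaxClients: memory you can spare for Apache / avg MB per worker
# e.g. 2000 MB available / 25 MB per worker = about 80

As the post illustrates, shared and copy-on-write pages make the per-process figure pessimistic, so the result is a starting point to watch and adjust, not a hard ceiling.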
I was just wondering what the best approach is to serve mobile device sites on my web server. It's an Apache 2 server running on CentOS, and I'm curious how best to present a site in plain standard output for a mobile site, i.e. no server-side programming or scripts (though I would like to do some eventually). I actually have someone asking for a person on his development team and would love to be the one with that skill.
I'm using Debian 6 (Squeeze) and I'm trying to migrate away from Apache to nginx, but my nginx installation seems to be showing some weird behavior.
aptitude install nginx-extras
/etc/init.d/apache2 stop
/etc/init.d/nginx start
the configuration file /etc/nginx/global.conf syntax is ok
[alert]: mmap(MAP_ANON|MAP_SHARED, 33554432) failed (28: No space left on device)
configuration file /etc/nginx/global.conf test failed
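That alert means nginx could not allocate the 32 MB of shared memory it asked for; on small VPSes and containers this is often a tmpfs or shared-memory limit rather than actual disk space. A quick diagnostic sketch (not a confirmed cause):

Code:
df -h /dev/shm                      # is the shm filesystem tiny or full?
sysctl kernel.shmmax kernel.shmall  # kernel shared-memory limits
ipcs -m                             # shared-memory segments already in use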
I had a task to install nginx. I downloaded nginx-0.5.35.tar.gz and extracted it. I got the following error when I ran these commands: ./configure --with-http_gzip_static_module and ./configure --with-http_geoip_module
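For what it's worth, both modules can be enabled in a single configure run; a sketch of a typical source build, assuming the build prerequisites such as the PCRE, zlib and GeoIP development libraries are installed (the geoip module in particular needs the GeoIP library):

Code:
./configure --with-http_gzip_static_module --with-http_geoip_module
make
make install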
We have more than 5000 users and 7 Squid proxy servers with a fairly high-end configuration (up to 4 GB RAM and 5 x 320 GB disks each) on RHEL 4 and 5. Most users complain that at peak hours their browsing is slow, even though we have a 1 Gbps link. At peak hours, i.e. when more than 550 established connections are flowing, browsing gets slow. How do we fine-tune this? Is Squid only able to handle around 600 connections?
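A wall at a few hundred connections is often Squid's file-descriptor limit rather than bandwidth; checking it is cheap (a diagnostic sketch, assuming the cache-manager interface is reachable):

Code:
squidclient mgr:info | grep -i 'file descr'   # maximum vs. available descriptors
ulimit -n                                     # limit of the environment that starts squid
# on RHEL, the squid init script may honour SQUID_MAXFD in /etc/sysconfig/squid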
Is there a way to get nginx to perform DNS lookups at regular intervals against the hostnames defined for upstream servers? It seems nginx only performs a DNS lookup once, when it first starts, and subsequently does not perform any other lookups. This causes problems when the IP addresses of our upstream servers change.
I posted this same question in the nginx forum; however I also posted it here as it seems that not many of the posts there get answered.
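One commonly cited workaround (a sketch with placeholder names, and version-dependent: the resolver's valid= parameter needs a reasonably recent nginx) is to put the hostname in a variable, which forces nginx to resolve it at request time through the resolver directive instead of once at startup:

Code:
server {
    listen 80;
    resolver 8.8.8.8 valid=30s;          # re-resolve at most every 30 seconds
    location / {
        set $backend "http://app.example.com";
        proxy_pass $backend;
    }
}

The trade-off is that when the variable holds a plain hostname rather than an upstream group, the load-balancing features of an upstream{} block no longer apply.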
I have been looking for a while now, but I keep getting 403s for directories but not for files. So if you go to http://gmod.ws/ you get the error, but if you go to http://gmod.ws/index.php you don't. I don't see where the problem is. We're running a CentOS 5.5 box.
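A 403 for "/" while "/index.php" works is very often just a missing directory index, so the server refuses to show the directory; a guess at what the relevant nginx fragment should contain (paths are placeholders, and the same idea applies to Apache's DirectoryIndex):

Code:
server {
    listen 80;
    server_name gmod.ws;
    root /var/www/gmod.ws;
    index index.php index.html;   # without this, a request for "/" can return 403
}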
I'm trying to get php-fpm running to use with nginx. I usually just use spawn-fcgi, but I would like to try php-fpm this time. I was hoping it would be as easy as yum install php-fpm, but alas, that is not so. How do I get php-fpm working on CentOS?
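On older CentOS releases php-fpm is not in the base repositories, so it usually comes from a third-party repo such as EPEL or Remi. A sketch of the common route (repo availability and package names vary by release, so treat this as an assumption to verify):

Code:
yum install epel-release        # on releases where the extras repo provides it;
                                # otherwise the EPEL release RPM must be added manually
yum install php-fpm
chkconfig php-fpm on
service php-fpm start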
I run a web server using Mandriva 2010.2. I use Webmin/Virtualmin to make administering the virtual hosts easier. I've tried using the nginx module for Webmin that is at [URL], but for some reason the module could not be used by Webmin.
As Pound is broken under Fedora 15, which I'm running on my Linux server, I decided to try out nginx and move the Pound configuration to something in nginx that does the same. However, the nginx configuration is a lot more complex than Pound's. What I'm trying to do:
- nginx will listen on port 80
- requests for www.mydomain.nl => Domino HTTP server (on a different server)
- if this server is not reachable, go to another Domino HTTP server
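That maps quite directly onto an upstream block with a backup server; a minimal sketch with placeholder addresses:

Code:
upstream domino {
    server 10.0.0.10:80;          # primary Domino HTTP server
    server 10.0.0.11:80 backup;   # only used when the primary is unreachable
}

server {
    listen 80;
    server_name www.mydomain.nl;
    location / {
        proxy_pass http://domino;
        proxy_set_header Host $host;
    }
}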
I have an enormous quad-core machine with 16 GB RAM and dual gigabit NICs. It used to be for MySQL, but we have upgraded the whole database infrastructure, so now this server is left floating. I had the great idea of turning it into a reverse proxy (using Apache mod_proxy), and it really handles a ton of requests. But I have a feeling we are not getting the most out of what it can offer.
Our traffic consists of a few thousand very small (less than 10 bytes) AJAX calls per second, and frequently I find we are running out of kernel-allocated network stack to handle all the requests. Often we get the kern.log warning "possible SYN flooding on port 80. Sending cookies." and other things like this. Obviously we are not actually being SYN flooded; we just have very high demand.
So far I have found a few kernel tuning guides that tell the kernel to allocate more of the base system memory for networking, but every guide I have found is aimed at increasing performance across WAN links (direct backbones between offices, etc.), usually with very large file transfers as the priority. One such example (and a great write-up) is here:
cyberciti.biz/faq/linux-tcp-tuning/
I was hoping some people could provide further input, along the lines of disabling nf_conntrack (to speed up socket set-up/tear-down time) or anything else that will speed up a high-throughput proxy like mine. Any links to studies or benchmarks comparing different configurations or hardware get extra points!
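For reference, these are the kinds of knobs usually discussed for a proxy taking a very high rate of tiny connections; the values below are illustrative assumptions for /etc/sysctl.conf, not recommendations:

Code:
net.core.somaxconn = 4096               # listen backlog ceiling
net.ipv4.tcp_max_syn_backlog = 8192     # half-open connection queue (the SYN-cookie warning)
net.core.netdev_max_backlog = 8192      # per-CPU packet backlog
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1               # reuse TIME_WAIT sockets for outgoing connections
# if connection tracking stays loaded, give it headroom instead of dropping packets:
net.netfilter.nf_conntrack_max = 262144

Apply with sysctl -p, and note that the application side (Apache's ListenBacklog and MaxClients, or moving the proxy to an event-driven server) matters at least as much as the kernel.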
I want to make an FTP server with a web frontend for administering it, e.g. creating users and groups, setting which directories users can access, and so on. I found some web frontends for ProFTPD, like those at URL..., but all of them seem to be very old and don't work nicely. I tried ProFTP Administrator and it won't work. Do you know of any solutions to manage FTP server users, groups and permissions over the web, or via some frontend program?
I have a VPS installed with Debian, nginx, MySQL, PHP and WordPress. By default the template gives one WordPress install in the /var/www/ directory. However, now I want to add more domains with WordPress to that VPS. I created a directory called /home/public_html/domain1.com and linked it to the /var/www/ directory. Then I created another directory called /home/public_html/domain2.com and uploaded WordPress there. What I did next was edit my /etc/nginx/nginx.conf file with the following code:
Code:
user www-data www-data;
worker_processes 4;

events {
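For illustration, per-domain server blocks along these lines would be the usual way to map each docroot to its domain; the paths and names below are assumptions based on the directories described above:

Code:
server {
    listen 80;
    server_name domain1.com www.domain1.com;
    root /home/public_html/domain1.com;
    index index.php;
}

server {
    listen 80;
    server_name domain2.com www.domain2.com;
    root /home/public_html/domain2.com;
    index index.php;
}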
I have a perfectly working installation of nginx / PHP / FastCGI on the latest stable Debian distribution. No problems at all, apart from this one: when a PHP script (script A) is written to request a PHP script on the same web server (script B), nginx takes several minutes to respond and finally the connection times out. It happens only when invoking script A through nginx; calling it from the command line works fine and I get the normal output of script B.
Literally, the test case is as simple as:
Script A:
PHP Code:
[code]....
I suppose the root of the problem may be some obstacle that occurs when php5-cgi ends up invoking itself, which is what happens when script A is called through nginx. But I have no idea yet how to address the problem. One of my PHP applications checks itself during installation; that's why I need to request a PHP script from a PHP script on the same server.
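If that hypothesis is right, one thing worth ruling out (an assumption, not a confirmed diagnosis) is a FastCGI backend with a single worker: script A then occupies the only php5-cgi process while it waits for script B, which can never be served until the timeout. With spawn-fcgi, the child count is controlled like this (port, user and count are placeholders):

Code:
# start php5-cgi with several children so one request can serve another
spawn-fcgi -a 127.0.0.1 -p 9000 -C 4 -u www-data -g www-data -f /usr/bin/php5-cgi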
I'm trying to set up my web server (nginx) as a catch-all virtual host, as per an example that can be seen here: [URL] (it's the Wildcard Subdomains in a Parent Folder example). Now, here's my issue. I use WordPress on the coburndomain.org domain. I have pretty URLs enabled, which make my WordPress articles look like this: [URL]. At the moment, nginx is reporting 500 errors, saying that index.php is not a directory. What I want to do is write a rewrite rule that lets me use the above URL format with nginx.
I followed this tutorial to do so: [URL], but I'm not sure how to apply it to my setup. Here are my configuration files from Debian Squeeze with nginx on board:
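For context, the usual pattern for WordPress pretty permalinks on nginx avoids rewrite rules altogether and uses try_files; the docroot and FastCGI address below are assumptions:

Code:
location / {
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}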
Does anybody have experience with Linux tuning? I'm really interested in sysctl.conf settings for batch servers (3D rendering, physics simulations, etc.), i.e. memory and CPU settings for heavily loaded systems. What kind of settings do you have in your clusters? I'm working with Red Hat Enterprise Linux 5 x86_64.
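As a starting point, these are the memory-related sysctls most often discussed for long-running batch workloads; the values are illustrative assumptions for /etc/sysctl.conf, not recommendations for any particular cluster:

Code:
vm.swappiness = 10               # prefer keeping working sets in RAM over swapping
vm.dirty_ratio = 40              # let big writers buffer more before they block
vm.dirty_background_ratio = 10   # start background writeback earlier
vm.overcommit_memory = 0         # default heuristic overcommit; 2 for strict accounting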
I've installed nginx-full on a Debian Jessie server and want to install WordPress, but for some reason nginx-full isn't being seen as providing the httpd virtual package, so the wordpress package insists on installing Apache.
According to everything I can find, nginx-full provides the httpd virtual package and this shouldn't happen. Is this a bug?
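A way to check what apt is actually doing (the exact dependency line of the wordpress package is an assumption to verify): apt satisfies an "apache2 | httpd"-style dependency with the first real alternative unless a provider of httpd is already installed, so installing nginx-full before wordpress, rather than in the same transaction, usually avoids pulling in Apache.

Code:
apt-cache show nginx-full | grep ^Provides          # should list httpd
apt-cache depends wordpress | grep -i -e httpd -e apache
apt-get install nginx-full
apt-get install wordpress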
I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking.
First, system info. Monitored server: CentOS 5.3, kernel 2.6.18-128.1.1.el5
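Munin's stock apache_* plugins scrape Apache's mod_status page (typically http://localhost/server-status?auto), so one thing to verify on the monitored server is that mod_status is enabled and reachable from localhost; an Apache 2.2-style fragment, assuming the default layout on CentOS:

Code:
# httpd.conf or a conf.d snippet
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>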
I am upgrading my server and I have a lot of sites. Since I cannot take the server down for a few days, maybe a week, until I manage to migrate all the sites to the new machine, I figured I could migrate them one by one. After migrating one, I would somehow tunnel the requests for that name-based virtual host to my internal machine. When everything is migrated, I would switch the machines, update IPs and so on, and everything would work just fine.
However, I cannot seem to find a way to do this tunneling. Is this at all possible? If not, what alternatives do I have?
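One way to read that "tunneling" is a per-vhost reverse proxy on the old machine, which Apache can do with mod_proxy; the hostname and internal IP below are placeholders:

Code:
# on the old server: forward one already-migrated vhost to the new internal machine
<VirtualHost *:80>
    ServerName migrated-site.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.20/
    ProxyPassReverse / http://10.0.0.20/
</VirtualHost>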
What kind of distribution/software would you recommend for a VPN server that can handle 10 different nets, each separated from the others? If I connect as user1 I get onto net1, and user2 gets onto net2. The VPN server is connected to the other location at all times; I just want to be able to connect in to whichever net I want. The reason I don't want to go the Destination route is that the VPN server is going to handle other stuff that the nets will be connected to.
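If OpenVPN were the pick, the per-user-to-per-net mapping is commonly done with client-config-dir files plus firewall rules; every name, subnet and address below is a placeholder assumption:

Code:
# /etc/openvpn/server.conf (fragment)
server 10.8.0.0 255.255.255.0
client-config-dir /etc/openvpn/ccd

# /etc/openvpn/ccd/user1 -- fixed address, and only net1's route is pushed
ifconfig-push 10.8.0.10 10.8.0.9
push "route 192.168.1.0 255.255.255.0"

# /etc/openvpn/ccd/user2 -- fixed address, and only net2's route is pushed
ifconfig-push 10.8.0.14 10.8.0.13
push "route 192.168.2.0 255.255.255.0"

The pushed routes only steer the client; actual separation still needs iptables rules on the server keyed on each client's fixed address.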
From my main Postfix SMTP heads, I am sending just a couple select emails (primarily support emails) off to a server that receives them and pipes them into the support software. So far this totally works perfectly and I am pretty happy with the configuration. However, in order for sendmail on the support server to receive those emails I have to place them in the virtusertable of course, but I also have to activate their domain in the local-host-names file. That then causes sendmail to consider itself as the destination server for that whole domain.Is there a way to make sendmail receive email for select addresses without making it think it's the server for the whole domain? This server is only receiving email from two specific smtp servers, so I wonder if I could just permit relaying? Wonder if that would just cause a giant loop though.
How do clients handle offline syslog servers?Will the log files be buffered locally to be sent to the syslog server when it comes back online, or will any log data generated during downtime be lost in cyber space?
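For what it's worth, plain UDP syslog has no acknowledgement or buffering, so messages sent while the server is down are simply lost; whether anything is spooled depends entirely on the client. rsyslog, for example, can queue to disk and retry until the server returns; a sketch with a placeholder hostname and limits:

Code:
# client-side /etc/rsyslog.conf (legacy syntax)
$WorkDirectory /var/spool/rsyslog
$ActionQueueType LinkedList        # in-memory queue backed by disk
$ActionQueueFileName fwdq          # spool files created under the work directory
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1         # retry forever instead of discarding
*.* @@logserver.example.com:514    # @@ = TCP; a single @ would be UDP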