Is there a way to get nginx to perform DNS lookups at regular intervals against hostnames that are defined for upstream servers? It seems nginx only performs a DNS lookup once, the first time it starts, and does not perform any further lookups after that. This causes problems when the IP addresses of our upstream servers change.
I posted this same question in the nginx forum; however I also posted it here as it seems that not many of the posts there get answered.
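The closest thing I've seen suggested (on reasonably recent nginx versions) is to move the hostname out of the upstream block and into a variable, which forces nginx to re-resolve it through the configured resolver instead of caching the IP at startup. A minimal sketch, inside the http {} block; the backend name and the resolver address are made up:
Code:
resolver 10.0.0.2 valid=30s;

server {
    listen 80;

    location / {
        # Using a variable in proxy_pass makes nginx resolve the name at
        # request time via the resolver above, honouring the valid= interval.
        set $backend "backend.example.com";
        proxy_pass http://$backend;
    }
}
Does anyone know of a cleaner way to do this with a normal upstream block?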
I had a task to install nginx. I downloaded nginx-0.5.35.tar.gz and extracted it, and I got the following error when I ran these commands: ./configure ----with-http_gzip_static_module and ./configure --with-http_geoip_module.
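For reference, the build sequence I was attempting is sketched below. Note that the GeoIP module needs the GeoIP C library and headers installed first, and (if I remember right) both of these optional modules were added to nginx after the 0.5 branch, so on 0.5.35 the switches may simply be rejected; checking ./configure --help for this tarball would confirm which flags it knows about.
Code:
# sketch of a source build; verify available --with-* switches first
tar xzf nginx-0.5.35.tar.gz
cd nginx-0.5.35
./configure --help | grep -E 'gzip_static|geoip'
./configure --with-http_gzip_static_module --with-http_geoip_module
make && sudo make install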
I have an odd issue with Postfix 2.5.5 trying to relay email internally to a range of mail servers: it keeps ignoring the transport map [ ] and instead always does MX lookups.
Essentially the server only allows connections from an internal network, and only for certain domains, which it relays to other mail servers.
It has no local delivery, and yet every time mail is passed to it, it checks the local network DNS server for MX information (or, with disable_dns_lookups enabled as below, for the A record of the domain) and tries to deliver there instead of to the transport map destination.
Here's the main.cf:
Code:
# See /usr/share/postfix/main.cf.dist for a commented, more complete version
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
disable_dns_lookups = yes
# appending .domain is the MUA's job.
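For comparison, what I expected to work is the usual transport map form where the nexthop is wrapped in square brackets, which tells Postfix to skip the MX lookup and connect straight to that host. A sketch with made-up domains and an assumed internal relay address:
Code:
# /etc/postfix/transport  (hypothetical entries)
# Brackets around the nexthop suppress MX lookups for that destination.
example.com        smtp:[10.1.1.25]
.example.com       smtp:[10.1.1.25]

# main.cf
transport_maps = hash:/etc/postfix/transport

# rebuild the map and reload afterwards:
#   postmap /etc/postfix/transport && postfix reload
Is there something else that overrides the transport table in this setup?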
I have been looking at this for a while now, but I keep getting 403s for maps but not for files. So if you go to http://gmod.ws/ you get the error, but if you go to http://gmod.ws/index.php you don't. I don't see where the problem is. We're running a CentOS 5.5 box.
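A guess, assuming the box is running nginx (the server software isn't stated above): a 403 on the bare directory while the explicit /index.php works usually means the index directive doesn't list index.php, so nginx has nothing to serve for "/" and, with autoindex off, answers 403. A sketch of the relevant bits, with a made-up document root:
Code:
server {
    listen 80;
    server_name gmod.ws;
    root /var/www/gmod.ws;   # hypothetical document root

    # index.php must be in this list, otherwise a request for "/" has
    # no file to fall back to and nginx returns 403 (autoindex is off).
    index index.php index.html;
}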
I'm trying to get php-fpm running to use with nginx. I usually just use spawn-fcgi, but I would like to try php-fpm this time. I was hoping it would be as easy as yum install php-fpm, but alas, it is not. How do I get php-fpm working on CentOS?
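The route I was going to try, sketched below, is pulling php-fpm from a third-party repository (EPEL or Remi, depending on the CentOS release; it isn't in the base repos) and then pointing nginx at the FPM pool. Package names, the 127.0.0.1:9000 listener, and the location block are assumptions from a default setup, so adjust as needed.
Code:
# install and start php-fpm (assumes EPEL/Remi is already enabled)
yum install php-fpm
chkconfig php-fpm on
service php-fpm start

# nginx side: hand *.php to the FPM pool on 127.0.0.1:9000
#   location ~ \.php$ {
#       include fastcgi_params;
#       fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#       fastcgi_pass 127.0.0.1:9000;
#   }
Is there anything CentOS-specific beyond this that I'm missing?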
I run a web server on Mandriva 2010.2 and use Webmin/Virtualmin to ease the administration of virtual hosts. I've tried the nginx module for Webmin that is at [URL], but for some reason the module could not be used by Webmin.
As Pound is broken under Fedora 15, which I'm running on my Linux server, I decided to try out nginx and move the Pound configuration to an nginx equivalent. However, the nginx configuration is a lot more complex than Pound's. What I am trying to do (see the sketch after this list):
- nginx will listen on port 80
- requests for www.mydomain.nl go to a Domino HTTP server (on a different server)
- if this server is not reachable, go to another Domino HTTP server
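The nginx piece that seems to map most directly onto what Pound was doing is an upstream block with a backup server; this is the sketch I've arrived at so far (Domino hostnames are made up), and I'd like to know if it's the right shape:
Code:
upstream domino_www {
    server domino1.internal.example:80 max_fails=2 fail_timeout=10s;
    server domino2.internal.example:80 backup;   # used only when the first is down
}

server {
    listen 80;
    server_name www.mydomain.nl;

    location / {
        proxy_pass http://domino_www;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}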
My intention is to have nginx on the frontend handling all static files, with Apache + mod_php in charge of handling PHP requests. I have tuned nginx to be as efficient as possible at serving static files, but I have very little experience with Apache. Since Apache would only be processing PHP requests, would the standard Apache optimization guides suffice, or would it be best to configure it differently?
PS: this is a dedicated file server; the database is hosted separately.
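For context, the frontend split I have in mind looks roughly like the sketch below (hostname and the Apache port are assumptions). My working assumption is that the Apache side then mostly needs prefork MaxClients sized to the PHP memory footprint and KeepAlive turned down, rather than the generic static-file tuning advice; is that right?
Code:
server {
    listen 80;
    server_name files.example.com;     # hypothetical
    root /srv/www/files;

    # nginx serves anything static straight off disk ...
    location / {
        try_files $uri $uri/ =404;
    }

    # ... and only PHP requests are proxied to Apache + mod_php on :8080
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}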
I have a VPS installed with Debian, nginx, MySQL, PHP and WordPress. By default the template gives one WordPress install in the /var/www/ directory. However, I now want to add more domains with WordPress to that VPS. I created a directory called /home/public_html/domain1.com and linked it to the /var/www/ directory. Then I created another directory called /home/public_html/domain2.com and uploaded WordPress there. Next I edited my /etc/nginx/nginx.conf file with the following code:
Code:
user www-data www-data;
worker_processes 4;

events {
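Rather than symlinking the second site into /var/www, what I think I actually need is one server block per domain inside the http {} section, each with its own root. Is something like this sketch the right direction (PHP handling omitted here and assumed to be the same block both sites already use)?
Code:
http {
    # one server block per domain, each pointing at its own document root
    server {
        listen 80;
        server_name domain1.com www.domain1.com;
        root /home/public_html/domain1.com;
        index index.php;
        # ... same php/fastcgi location the first site already uses ...
    }

    server {
        listen 80;
        server_name domain2.com www.domain2.com;
        root /home/public_html/domain2.com;
        index index.php;
        # ... same php/fastcgi location ...
    }
}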
I have a perfectly working installation of nginx / PHP / FastCGI on the latest stable Debian distribution. No problems at all, apart from this one: when a PHP script (script A) is written to request a PHP script on the same web server (script B), nginx takes several minutes to respond and the connection finally times out. And it happens only when invoking script A through nginx. Calling it from the command line works fine; I get the normal output of script B.
Literally, the test case is as simple as:
Script A:
PHP Code:
[code]....
I suppose the root of the problem may be some obstacle that occurs when php5-cgi ends up invoking itself, which is what happens when script A is called through nginx. But I have no idea yet how to address it. One of my PHP applications checks itself during installation, which is why I need to request a PHP script from a PHP script on the same server.
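For completeness, the pattern I mean is roughly the following (hypothetical file names; this is not my actual code). My working theory is that if nginx has only a single FastCGI/php-cgi worker, script A occupies that worker while it waits, so the nested request for script B has no free worker to run on and hangs until the timeout.
PHP Code:
<?php
// script_a.php -- asks the same web server for script_b.php over HTTP.
// With a single FastCGI worker this blocks until it times out: the worker
// running script_a.php is the only one that could serve script_b.php.
$out = file_get_contents('http://localhost/script_b.php');
echo "B said: " . $out;

<?php
// script_b.php
echo "hello from B";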
I'm trying to set up my web server (nginx) as a catch-all virtual host, as per an example that can be seen here: [URL] (it's the "Wildcard Subdomains in a Parent Folder" example). Now, here's my issue. I use WordPress on the coburndomain.org domain. I have pretty URLs enabled, which make my WordPress articles look like this: [URL]. At the moment, nginx is reporting 500 errors, saying that index.php is not a directory. What I want to do is write a rewrite rule that lets me use the above URL style with nginx.
I followed this tutorial to do so: [URL], but I'm not sure how to apply it to my setup. Here are my configuration files from Debian Squeeze with nginx on board:
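Independent of my exact files, the rule I suspect I'm missing is the standard WordPress permalink fallback, which on nginx is usually written as try_files rather than an Apache-style rewrite. A sketch of what I was going to try (the fastcgi_pass target is an assumption; it should match whatever the setup already uses):
Code:
location / {
    # Hand anything that isn't a real file or directory to WordPress's
    # front controller so pretty permalinks resolve.
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;   # or the unix socket already in use
}
Does this belong in the catch-all server block, or does WordPress need its own vhost for this to work?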
I've installed nginx-full on a Debian jessie server and want to install WordPress, but for some reason nginx-full isn't being recognized as providing the httpd virtual package, so the wordpress package insists on installing Apache. According to everything I can find, nginx-full provides the httpd virtual package and this shouldn't happen. Is this a bug?
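Before filing it as a bug, I suppose it's worth checking what is actually pulling apache2 in; a sketch of the checks I intend to run with the stock apt tools:
Code:
# confirm the installed nginx-full really advertises the virtual package
dpkg -s nginx-full | grep -i provides

# see which dependency of wordpress resolves to apache2
apt-cache depends wordpress

# if aptitude is installed, ask it directly
aptitude why apache2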
I work in IT, but networking is my weakest area. I'm getting very slow DNS lookups (60+ seconds, with lots of page timeouts) in Firefox and Chromium on my Kubuntu laptop. Windows clients (XP and 7) work fine.
I'm running Ubuntu 11.04 and I'm really new to Linux. My problem is that whenever I browse to a site, it loads very slowly because DNS lookups take a long time. I installed Ubuntu with an onboard NIC and later switched to a PCI NIC (D-Link DGE-530T). Although I disabled the onboard NIC in the BIOS, it doesn't help. Could a conflict between the two configurations be the problem? My download rates are fine; it's just lookups that take really long (up to ~10 seconds). I know the PCI network card is fine, because when I boot into Windows 7, lookups are normal again (~300 ms). At first I thought about installing the sk98lin drivers for the PCI NIC, but I saw a couple of places where people mentioned that the skge driver that ships with the kernel is better.
I have tried a system-wide as well as a Firefox-only disable of IPv6. Here is my /etc/udev/rules.d/70-persistent-net.rules:
Code:
# PCI device 0x1186:0x4b01 (skge)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:24:01:14:eb:39", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x10de:0x0373 (forcedeth)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1e:8c:3e:19:ed", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
The interface I want to use, according to the listing above, is the one with the MAC 00:24:01:14:eb:39.
I tried removing one of the entries in the file above and rebooting, but it still didn't work. Here is a look at my /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 192.168.1.10
    gateway 192.168.1.1
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
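One thing I notice is that the static stanza above defines no name servers at all, so resolution depends entirely on whatever is left in /etc/resolv.conf. A sketch of the two usual ways to pin that down (the 192.168.1.1 router address is an assumption); would one of these explain the slow lookups?
Code:
# /etc/resolv.conf (edited directly on releases without resolvconf)
nameserver 192.168.1.1
nameserver 8.8.8.8

# or, if the resolvconf package is installed, add to the eth0 stanza
# in /etc/network/interfaces instead:
#     dns-nameservers 192.168.1.1 8.8.8.8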
Laptop connects to a (wired) ethernet port on a DLINK DIR-625 wireless router using DHCP. All works perfectly.
Using the same laptop connecting to the same wireless router, but using the wireless adapter with DHCP instead of wired ethernet, I can ping IP addresses on the LAN and also WAN IPs to/from anywhere on the net. I can perform reverse name resolutions (IP to host name), but not forward lookups (host name to IP address). I can use the DNS server obtained from DHCP, or specify a DNS server by IP address to perform the lookups; it makes no difference.
Web pages (on LAN servers or from the internet) are not accessible, whether addressed by site name or specifically by IP address.
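To narrow it down, I was going to compare a forward and a reverse query against the router directly over the wireless link (the addresses below are assumptions); if the TCP retry works while plain UDP times out, I'd suspect the wireless path is dropping large or fragmented UDP replies. Does that seem like a sensible test?
Code:
# forward lookup straight at the router's DNS (the failing case)
dig @192.168.0.1 www.example.com A

# reverse lookup, which reportedly works
dig @192.168.0.1 -x 192.168.0.1

# retry the forward lookup over TCP
dig +tcp @192.168.0.1 www.example.com A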
We have BIND 9.3 running on CentOS 5.2. We are able to do reverse lookups for the public IPs, but we are not able to resolve the private IPs on our network.
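My assumption is that the private ranges simply have no reverse zone of their own, since BIND won't answer reverse queries for RFC 1918 space unless a matching in-addr.arpa zone is defined and populated. Is a minimal setup like the following (names, paths and addresses are made up for a 192.168.1.0/24 network) the right approach?
Code:
// named.conf
zone "1.168.192.in-addr.arpa" IN {
    type master;
    file "/var/named/db.192.168.1";
};

; /var/named/db.192.168.1
$TTL 86400
@   IN SOA ns1.example.local. admin.example.local. (
        2011010101 ; serial
        3600 900 604800 86400 )
    IN NS  ns1.example.local.
10  IN PTR host10.example.local.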
I have a server and a client. The client must connect to the server (via the internet) to access external websites. (You can see the attachment; maybe it makes this clearer.) My actual problem is that I have configured Squid on my server, but I want to force SSL for the connection between the client and the server. I didn't really find good tutorials on that. Maybe someone has an idea, or some pointers?
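The directive I was planning to start from is https_port, which makes Squid itself speak TLS on the client-facing side; as far as I understand it needs a Squid built with SSL support, and the certificate paths and addresses below are assumptions, so treat this as a sketch. Whether the client's browser can actually talk TLS to a proxy is a separate question I'm unsure about.
Code:
# squid.conf (sketch)
# Terminate TLS from the client on port 3129; Squid still fetches the
# requested sites itself as usual.
https_port 3129 cert=/etc/squid/certs/proxy.crt key=/etc/squid/certs/proxy.key

acl internal_net src 192.168.0.0/24
http_access allow internal_net
http_access deny all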
I am currently using the following setting to set a user's primary group in Samba:
Code:
force group = +group
This almost does what I need, but I was wondering if it is possible to list multiple groups. Something like the following would be exactly what I need:
Code:
# If the user is in group1 set it as the primary group; if in group2 set that as primary.
force group = +group1, +group2
Does anyone know if this is possible, or whether I could use a script to force the primary group?
I'm looking to force eth0 and eth1 to 100 Mb/s full duplex with auto-negotiation off, but am having trouble.
When I run "ethtool -s eth1 speed 100 duplex full autoneg off" manually, it works fine. However, when I add it to the interfaces file it does not, after several different configuration attempts:
/etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#auto eth1
#iface eth1 inet dhcp
[Code].....
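For comparison, the form I was about to try hangs the ethtool call off the iface stanza itself with post-up (or pre-up), so it runs every time the interface is brought up. A sketch with made-up addresses; is this the recommended way on Debian-style systems?
Code:
auto eth1
iface eth1 inet static
    address 192.168.10.20
    netmask 255.255.255.0
    gateway 192.168.10.1
    # force 100/full and disable autonegotiation whenever eth1 comes up
    post-up /sbin/ethtool -s eth1 speed 100 duplex full autoneg off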
Also, it seems my current configuration won't allow both eth0 and eth1 to be active at the same time. I would like them to be active-active if possible (if one goes down, the other services all traffic, and vice versa).
It has been years since I had to mess with sendmail (I prefer Postfix), but I inherited a server that someone else configured. This machine is a web server, but it runs sendmail for the various web forms, etc. I want to configure sendmail on this machine to route ALL outgoing messages to the main email server, including mail to local users. I have read through sendmail configuration docs for the past three hours, but it's mostly Greek to me. Here is my current sendmail.mc file; could some kind soul tell me what I need to change (and where)?
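From what I've gathered so far, the macros involved are SMART_HOST (route non-local mail through the hub) plus MAIL_HUB/LOCAL_RELAY to catch mail for local users as well, added to sendmail.mc before the MAILER lines and then rebuilt. A sketch with a made-up hub name; am I on the right track, or is the nullclient feature the cleaner choice for a box like this?
Code:
dnl route all non-local mail through the main mail server
define(`SMART_HOST', `mailhub.example.com')dnl
dnl also hand mail addressed to local users to the hub instead of delivering locally
define(`MAIL_HUB', `mailhub.example.com.')dnl
define(`LOCAL_RELAY', `mailhub.example.com.')dnl

dnl rebuild and restart afterwards, e.g.:
dnl   m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart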
I am new to LDAP. I've installed an OpenLDAP server on a CentOS box but have yet to test it. My question is: how do I force users to log in to the system using LDAP instead of non-LDAP (local) authentication? For example, I created some users on the LDAP server, but these users also exist in /etc/passwd; when they log in to the server over SSH, authentication goes through the /etc/passwd file without ever being forced to use LDAP.
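My understanding so far is that as long as the same usernames exist in /etc/passwd, NSS/PAM satisfies the login from the local files first, so the usual approach is to wire LDAP into nsswitch/PAM with authconfig and then remove (or never create) the duplicate local accounts. A sketch, with the server and base DN made up; is this the standard way on CentOS?
Code:
# point NSS and PAM at the LDAP server (CentOS)
authconfig --enableldap --enableldapauth \
    --ldapserver=ldap://ldap.example.com \
    --ldapbasedn="dc=example,dc=com" --update

# the resulting /etc/nsswitch.conf lines look like:
#   passwd: files ldap
#   shadow: files ldap
#   group:  files ldap
# with "files" first, a matching /etc/passwd entry always wins, so the
# duplicate local users have to be removed to force LDAP logins.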
I'd like to enable SSL authentication in vsftpd.conf but still somehow force plain data transfers, even if the client is capable of SSL data transfer. The way I understand the config, if I set ssl_enable=YES then a client that wants to use SSL for data transfers can do so. I want to force plain data transfers but still have SSL enabled for login. Is this possible with vsftpd?
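For reference, the options I've found are the force_* pairs below; as far as I can tell they can require SSL on logins while leaving it merely optional (not forbidden) on the data channel, which may be as close as vsftpd gets. Paths are assumptions. Is there a knob I've missed that actively refuses SSL data connections?
Code:
# vsftpd.conf (sketch)
ssl_enable=YES
rsa_cert_file=/etc/ssl/certs/vsftpd.pem      # path is an assumption

# require TLS for the login/control channel...
force_local_logins_ssl=YES
# ...but do not require it for data transfers (clients may still request it)
force_local_data_ssl=NO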
I'm running Red Hat 5.5 on a Dell PowerEdge R910, on which we have KVM running with three virtual machines. The server freezes up frequently, forcing us to hard-restart it. We have gone through the logs and browsed several forums for the related errors but couldn't find any solution, and checking the Red Hat site was of no help either.