I have an NFS server that is getting tons of mount requests, and I suspect it's becoming a performance issue. Even when no one is in the office I see mount/unmount requests in my logs. Is this normal?
/var/log/messages:
Mar 30 10:32:42 morgan mountd[29643]: authenticated unmount request from uranus:757 for /morgan/users (/morgan/users)
I'm having an issue with a Samba server running on an Ubuntu "server". Technically, it's not a server; it's just an old desktop running Ubuntu 10.04, and I have a few server processes running (ProFTP, Samba, etc.). The Ubuntu server is where I store all of my important files, which get backed up to a separate hard drive. I shared folders via Samba, and I use two computers to access the shares. I access the shares with an .sh file I created that uses the mount -t cifs command to mount those shares.
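The script is essentially a one-liner like the following (the server name, share, credentials, and mount point here are placeholders, not my real values):
Code:
#!/bin/sh
# mount the Samba share over CIFS; all names below are placeholders
mount -t cifs //ubuntu-server/shared /mnt/shared -o username=me,password=secret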
It has been working flawlessly for a long, long time, up until recently. For the past few days to a week, I will try to mount the shares with no result. In the terminal, the commands just freeze, as if the command is trying to execute but having network issues. The only way I can get it to work is to reboot the Ubuntu server; then it maps flawlessly. But a day later, it's back to hanging when trying to mount.
I've lately been getting some strange NFS mount requests for nonexistent users' home directories on an F14 machine to my file server (CentOS). The message log on the file server shows the following:
May 23 03:10:53 data mountd[4835]: can't stat exported dir /export/home/httpd: No such file or directory
May 24 03:21:13 data mountd[4835]: can't stat exported dir /export/home/httpd: No such file or directory
May 25 03:26:53 data mountd[4835]: can't stat exported dir /export/home/httpd: No such file or directory
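Given that the errors land around the same time each night, my plan is to check whether a stale entry is still in the exports file (paths taken from the log above):
Code:
grep httpd /etc/exports       # is the missing directory still listed?
ls -ld /export/home/httpd     # confirm it really doesn't exist
exportfs -r                   # re-export after cleaning up /etc/exports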
The Squid documentation says that Squid accepts only HTTP requests but speaks FTP on the server side when FTP objects are requested.
We call Squid an HTTP and FTP caching proxy server. Does it also cache FTP content? Is it possible to configure FTP clients to use the Squid cache? When we make an FTP request to an FTP site via Squid, will the cache be bypassed?
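For context, the only way I know of for a client to use Squid for FTP is to hand it ftp:// URLs over HTTP, the way proxy-aware tools do (the host and port below are assumptions; 3128 is Squid's default):
Code:
export ftp_proxy=http://squid.example.com:3128/
wget ftp://ftp.example.com/pub/somefile.tar.gz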
I have a home DNS server that has been working for some time. Today I restarted it to clear the cache, and now it refuses to answer any requests. named starts fine with no errors. Here is the named config file that worked fine for about 2 weeks and now doesn't want to work.
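If it matters, this is roughly how I test whether named answers locally (example.com is just a placeholder):
Code:
rndc flush                       # clear the cache without a full restart
dig @127.0.0.1 example.com A     # query the server directly
named-checkconf /etc/named.conf  # verify the config still parses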
I'm trying to forward VNC requests from server A to server B; actually, I need server A to be just a VNC proxy, and other servers behind server A should be responsible for the VNC requests. I did it with an iptables rule, but it didn't work. For reference, all of my VNC sessions are in the range 59100 to 59199.
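What I had in mind is a DNAT setup along these lines (server B's address is a placeholder, and IP forwarding must be on):
Code:
# forward the whole VNC port range from server A to server B
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 59100:59199 -j DNAT --to-destination 192.168.1.20
# MASQUERADE so replies return through server A even if server B's
# default route points elsewhere
iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.20 --dport 59100:59199 -j MASQUERADE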
I'm just asking about a script (e.g., a bash script) that will tell me how many requests each website on the server is getting. Is there a way to find out from the shell how many requests or connections each website on the server gets, in order to determine which website is under flood or DoS/DDoS attack?
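Here's a rough sketch of the kind of thing I'm after, assuming Apache with one access log per vhost (paths are placeholders):
Code:
# requests per site: count lines in each vhost's access log
for log in /var/log/apache2/*access.log; do
    printf '%s %s\n' "$(wc -l < "$log")" "$log"
done | sort -rn

# current connections to port 80, grouped by client IP
netstat -ntp 2>/dev/null | awk '$4 ~ /:80$/ {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -rn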
Background: I have a small PC104 running openSUSE 11.1. I'm writing a small client/server application for debugging purposes using Mono and WCF. All the client does is make a request for information every 100ms. Problem: After about 20 requests the server quits responding to the client, even if the client is running on the same machine. I've run the exact same code on another laptop running openSUSE, as well as a laptop running Windows, and everything works great. Hopefully that rules out a flaw in the code, Mono, or openSUSE. Is there a kernel option or a network option that anyone knows about that might cause this sort of behavior?
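In case it's a resource ceiling on the little box, these are the knobs I plan to look at first (sysctl names can vary between kernel versions):
Code:
# look for connection-tracking or backlog limits being hit
dmesg | grep -i conntrack
sysctl net.ipv4.tcp_max_syn_backlog
cat /proc/sys/net/core/somaxconn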
We have a Cisco ASA firewall at work, which redirects all HTTP traffic to our webserver. We have to install a new website, but it can't be installed on the same server. Can setting up a Squid reverse proxy redirect the incoming HTTP requests to the appropriate webserver? If yes, could I get some directions on how to do it?
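For context, the squid.conf shape I've seen suggested for fronting two backends by hostname looks like this (every hostname and IP below is a made-up example):
Code:
# Squid as a reverse proxy (accelerator) in front of two origin servers
http_port 80 accel defaultsite=www.example.com vhost
cache_peer 192.168.0.10 parent 80 0 no-query originserver name=oldsite
cache_peer 192.168.0.11 parent 80 0 no-query originserver name=newsite
acl to_oldsite dstdomain www.example.com
acl to_newsite dstdomain new.example.com
cache_peer_access oldsite allow to_oldsite
cache_peer_access newsite allow to_newsite
http_access allow to_oldsite
http_access allow to_newsite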
I'm working on a thorny mod_rewrite problem. I have a Mac connected to my LAN running MAMP (Mac/Apache/MySQL/PHP). I request a non-existent file:
Code: http://192.168.1.2:8888/careers/db/1.html I see this in the mod_rewrite log file:
Code:
192.168.1.102 - - [14/Nov/2009:13:46:07 --0800] [192.168.1.2/sid#807df8][rid#8ec850/initial] (2) init rewrite engine with requested uri /careers/db/1.html
192.168.1.102 - - [14/Nov/2009:13:46:07 --0800] [192.168.1.2/sid#807df8][rid#8ec850/initial] (1) pass through /careers/db/1.html
Note that the requested uri is /careers/db/1.html.
If I change just the file extension on my request to PHP like so:
Code: [URL]
Then the request uri is totally different now. Here's the rewrite log:
Code:
192.168.1.102 - - [14/Nov/2009:13:47:23 --0800] [192.168.1.2/sid#807df8][rid#8fc850/initial] (2) init rewrite engine with requested uri /Applications/MAMP/htdocs/careers/
192.168.1.102 - - [14/Nov/2009:13:47:23 --0800] [192.168.1.2/sid#807df8][rid#8fc850/initial] (1) pass through /Applications/MAMP/htdocs/careers/
Note that the requested uri now has a full path which does not include the actual filename, /Applications/MAMP/htdocs/careers/.
What the heck? More info: if I request [URL], I can actually access p1.php; the requested uri is /careers/db/p1.php. The problem appears to be that the filename starts with a number. I can also request [URL] and get through to 1.php with requested uri /careers/db//1.php. Does mod_rewrite think /1 refers to a backreference or something? Why can Apache handle the .html file request properly but not the .php file request?
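To see exactly where the URI gets mangled, I'm going to crank the rewrite log all the way up (Apache 2.2 directives, valid in the server config rather than .htaccess; the log path is just an example):
Code:
RewriteLog /Applications/MAMP/logs/rewrite.log
RewriteLogLevel 9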
I run a small home server (Debian 4), which acts as my gateway to the internet (i.e., firewall), runs a web server, DHCP, and DNS, and acts as a file server to the rest of the machines on my home network. Now I know it's never a smart idea to have all those services running on the same machine that is acting as a firewall, but I don't fancy running multiple servers just for home use, as it's mainly allowing me to learn system administration.
I noticed a few days ago that my internet had become unbearably slow, to the point where I could sometimes not load web pages. I spent a while searching through log files on my gateway, to try and find out what was eating up all of my bandwidth. When I came to apache's access.log file, I was confronted with this:
Multiple requests to my server, for totally random websites. I didn't even know it was possible to make those kinds of queries to a webserver. The only thing on the web server is a browser-based torrent client. I have only shown a small snippet of the log file, but there are around 90k lines to different web addresses, from many different IPs. What I want to know is: what is happening? :S Why is someone querying MY web server for websites totally unrelated to it? And most of all, how can I stop it? My initial thought was to try using iptables to block multiple requests from the same IP within a certain time frame, which I think would work, as the server shouldn't really get many queries from external networks.
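The iptables idea looks roughly like this; the thresholds are guesses I'd have to tune:
Code:
# drop a client that opens more than 20 new connections to port 80
# within 60 seconds
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP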
On my server I provide an OCR file conversion service, but the problem is that when a user uploads a file and it's being converted, if you open another tab and try to load the site, it won't respond until that conversion is completed. In other words, until the PHP script finishes execution, Apache doesn't serve any other request to the same browser.
Here is my apache configuration:
Code:
ServerTokens OS
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
[code]....
You can check what I mean if you try to upload and convert a file and, while the file is converting, try opening the site in another tab.
I've had a VPS running Ubuntu 9.10 x64 server, hosting 3 websites of mine, for a few months now. This problem has been happening for a while: every once in a while, probably every 2 or 3 days, I'll wake up in the morning and apache won't be responding; no web pages will load. /etc/init.d/apache2 status reports that apache is functioning properly. Every time, I simply have to restart the daemon and things run fine for another few days.
I thought maybe it was a memory issue, so I lowered the MaxClients in the prefork module from 50 to 30 a few days ago, but the same thing is still happening. My VPS has 512MB of ram, burstable to 1GB, and according to Virtuozzo, there was only one night of high traffic where I even came close to that soft limit. I've checked my syslog, and there's absolutely nothing in there about apache. I've checked apache's error.log as well, and there's nothing in there that would indicate a problem either.
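Next time it hangs I'd like to see what the workers are actually doing, so I'm thinking of enabling mod_status (standard Apache 2.2 config, restricted to localhost):
Code:
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>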
For one project I use a web hosting service. I wanted the entire site to be https, so I bought a service from them in which they automatically install a trusted cert so people can access the site through the https protocol. Since http is still available, though, I need to do automatic rewrites or something to change http requests into https requests. (I don't have access to their Apache server configuration files or anything like that.) I found this code on the net to add to my .htaccess file:
Code:
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
I have a perfectly working installation of nginx / PHP / FastCGI on the latest stable Debian distribution. No problems at all, apart from this one: when a PHP script (script A) is written to request a PHP script on the same web server (script B), nginx takes several minutes to respond and finally the connection times out. And it happens only when invoking script A through nginx. Calling it from the command line works fine: I get the normal output of script B.
Literally, the test case is as simple as:
Script A:
PHP Code:
[code]....
I suppose the root of the problem may be some obstacle that occurs when php5-cgi ends up invoking itself, and that is what happens when script A is called through nginx. But I have no idea yet how to address the problem. One of my PHP applications checks itself during installation; that's why I need to request a PHP script from a PHP script on the same server.
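One thing I intend to verify is how many php5-cgi workers are running, since with a single worker script A would hold the only worker while waiting for script B, which would explain the timeout (a hypothesis, not something I've confirmed):
Code:
# count running php5-cgi processes; the [p] trick stops grep matching itself
ps ax | grep -c '[p]hp5-cgi'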
I upgraded to 8.04 on an AMD64 machine, then started to upgrade to 10.04 LTS using System -> Admin -> Update Manager. After packages started installing, I began getting MANY error popup windows saying dpkg exited with errors. Looking at the terminal window I see a lot of "cannot utime" messages, and it appears to be some error with tar. I saw on another forum that Lucid has a bug with tar. Now the distribution upgrade has just locked up. What should I do?
I have installed bootpd-dd2 and enabled and configured it with xinetd.
1. Made sure that there is a bootptab file and it is configured.
2. Tested that bootpd is working by running the command /usr/sbin/bootptest cmdbfs.
3. Ran tail -f on /var/log/messages and saw the requests from the test.
4. Rebooted a machine that is configured to PXE boot.
There are no messages from bootpd when a request is made, yet the request is picked up by a network monitor on a separate computer on the proper UDP port.
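As a next step I'll sniff on the server itself to see whether the PXE broadcasts even arrive (eth0 is a guess at the interface name):
Code:
# BOOTP/DHCP requests from the PXE client arrive as broadcasts on UDP 67
tcpdump -n -i eth0 udp port 67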
I recently set up a new web/file server with 9.10 Server x64 with 2 NICs, and I am trying to configure eth0 to respond to my LAN for internal Samba file sharing and eth1 to handle website/FTP requests on my static IP. But whenever eth0 is up, the server is not accessible at 173.XX.165.65 for web or FTP, though both work fine at 10.1.10.100. When eth0 is down, the public IP works fine. I have set /etc/network/interfaces like this:
Code:
# The primary network interface
auto eth0
iface eth0 inet static
address 10.1.10.100
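For comparison, the shape I think I need is a single default gateway on the public NIC only (the netmasks and gateway address below are guesses, not values from my actual config):
Code:
# two NICs: LAN interface with no gateway, public interface owns the default route
auto eth0
iface eth0 inet static
    address 10.1.10.100
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 173.XX.165.65
    netmask 255.255.255.0
    gateway 173.XX.165.1    # placeholder gateway address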
A Linux (CentOS 5.3) server is set up as an Apache reverse proxy. The reverse proxy server is open to the outside, and an internal server is mapped via the ProxyPass configuration. An SSL certificate is also installed on the Apache reverse proxy server. The problem is that it is extremely slow in serving HTTP requests through the reverse proxy. There is no problem with server resources or bandwidth. When the internal server is accessed directly through the Internet, there is no delay. The backend server and the reverse proxy server are also on the same switch (same subnet). When I searched the net, there were recommendations to enable caching in Apache, and I did so in httpd.conf.
But still there is no progress. Do I need to enable caching in ssl.conf too? Or is there any other workaround to speed up the Apache reverse proxy? Is there a way to check that caching is happening?
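On the last question, one check I know of: a response served from the cache should carry an Age header, so something like this ought to reveal hits (the hostname is a placeholder):
Code:
# a cached response from mod_cache should include an Age header
curl -sI https://proxy.example.com/somepage.html | grep -i '^Age:'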
I'd like to announce the release of cmus 2.3.0 here. cmus 2.3.0 features gapless MP3 playback, a native PulseAudio output plugin (which cures all the problems with PA ALSA emulation present in 2.2.0), a very fast metadata cache, and much improved handling of compilations.
Not to mention tons of bugfixes since 2.2.0, which was released almost 3 years ago. [URL]
This is no huge problem, but it is rather annoying to me. I am using the 10.04 beta, and whenever I get a large update (like an updated kernel), GRUB adds another boot option to the menu (I'm dual-booting Vista and Ubuntu). So my GRUB menu looks something like this when I turn my computer on:
GNU GRUB version 1.98-1ubuntu2
Ubuntu, with Linux 2.6.32-18-generic
Ubuntu, with Linux 2.6.32-18-generic (recovery mode)
1. Make a clean install of Squeeze.
2. Upgrade to sid.
3. Run this command after installing the X server:
aptitude install kdm kde-l10n-es kdebase
Then aptitude installs kdm, kdebase... and brasero, metacity, gvfs, gnome-control-center, evolution... (?) I only want to install KDE, so why does aptitude also install all this GNOME crap? And I say crap because I only want to install the KDE desktop; I use GNOME on my laptop =/ If you uncheck "install recommends" in aptitude, you can make a clean KDE install.
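On the command line, the equivalent of unchecking that box is (note the global setting affects every future install):
Code:
# one-off install without recommended packages
aptitude install --without-recommends kdm kde-l10n-es kdebase
# or make it the default for apt/aptitude
echo 'APT::Install-Recommends "false";' >> /etc/apt/apt.conf.d/99norecommends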
Using 'apache2ctl status' when my server is slow, I've noticed that it seems to max out at '128 requests currently being processed', causing my site to have page loads of up to 20+ seconds (the normal request count is about 30-50); when the count drops lower, my server seems to go back to normal speeds. So I'm wondering, is there a way to increase that limit, as my server more than has the resources to do so?
Server specs:
AMD Athlon 64 X2 4800+ (2.5GHz)
4GB DDR2 RAM
100Mbit unmetered
Ubuntu Server 8.10
apache2 settings:
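For reference, these are the prefork directives I understand to govern that ceiling (Apache 2.2; MaxClients can't exceed ServerLimit, and the values below are only examples to tune against available RAM):
Code:
<IfModule mpm_prefork_module>
    StartServers         10
    MinSpareServers      10
    MaxSpareServers      20
    ServerLimit         256
    MaxClients          256
    MaxRequestsPerChild 1000
</IfModule>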
Back in April I set up an Ubuntu DHCP server and a multiple-VLAN network [URL] to migrate our various servers, workstations, etc. off the 192.168.1.1/24 network that everything was on, because we were running out of address space. I built out the new network and everything worked great, except our AD server would never get an IP address from the DHCP server (static reservation), and even if I set the IP statically on the AD server, it couldn't ping the gateway and no one could log in. After several attempts to resolve this, including bringing in outside help, we were never able to figure out what the problem was.
Now, 6 months later, I have time to revisit the issue without affecting the live network. I used Acronis and imaged the AD server last Friday, cloned it onto another box with the same hardware, and put it up on the new network that's been sitting unused for the last 6 months. Today, when I statically set the IP on the AD server (which is what I want), it connects and I can ping its gateway 192.168.1.1 and all the way across VLANs to a test sales agent workstation at 192.168.8.xxx on VLAN 800, but only if I statically assign the agent's station an IP address.
When I try to get an IP address via DHCP, it fails as destination unreachable. Nothing has changed in the last 6 months on the DHCP server, but now for some reason it can't ping its default gateway 192.168.1.1. All of the config files are the same as they were left from the post linked above, aside from the VLAN IDs, which were changed from 1s to 100s (i.e. VLAN 3 is now VLAN 300). /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

auto vlan100
iface vlan100 inet static
[code]....
I don't understand why it can't reach the gateway. When I do a tcpdump I can see the DHCP requests come in on eth0, but the server never responds, and I'm pretty sure that's because it isn't "seeing" them, since it thinks there isn't a network connection. But I don't know how to troubleshoot to find out where the problem lies.
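The troubleshooting I have in mind so far (interface names match my config above; the rest is generic):
Code:
# watch DHCP traffic on the physical NIC and on the VLAN interface
tcpdump -n -e -i eth0 port 67 or port 68
tcpdump -n -e -i vlan100 port 67 or port 68
# confirm the VLAN interface is up and tagged correctly
ip -d link show vlan100
ip addr show vlan100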
Server: LAMP (Debian, Apache 2, MySQL, PHP5). A bit of info on my network: there is another service here that already uses port 443. It made my website time out, hence the move to another port. Plus, I don't want the two services sharing the port. What I am trying to do is forward 443 requests to another port where the SSL service is running, so I can hide my port number in the URL.
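The kind of rule I'm experimenting with, where 8443 stands in for whatever port the SSL service really listens on:
Code:
# rewrite incoming connections to 443 so they land on the service's real port
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443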
I've tried to get an openSUSE box I have to share a directory via NFS. I've failed each time, but I thought that the third time I'd enlist some help from the forums, if I could. How do I know that the NFS server and not the client is the problem? Short answer is: I don't! That's why NFS (and many network problems) are laborious; your troubleshooting needs to take place at both source and destination. Next question: what do I have set up so far? Well, I did download the NFS server kernel stuff (two months back), and /etc/init.d/nfsserver start seems to get set up OK. No errors, and the daemons nfsd, idmapd, and mountd are all running. So, I *think* that part is OK. I have the share set up properly in /etc/exports and have run "exportfs -r".
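Server-side, the checks I know to run are these (the export path and subnet in the comment are placeholders for my real ones):
Code:
# an /etc/exports line typically looks like:
#   /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -v              # list what is actually exported right now
showmount -e localhost   # ask mountd for the export list
rpcinfo -p localhost     # confirm portmapper, mountd and nfs are registered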
OK, now onto the trickier stuff: the client and iptables. On the client, pinging the NFS server box is perfect, and I have rpcbind running. The reported error is "mount.nfs: mount system call failed", though from experience NFS errors don't mean a whole lot. However, I will go off and check now whether I need mountd running on the client side too. Then there's iptables... ouch, that could be a long and painful trek. I don't see any specific ports being blocked, and it's the iptables setup that the default v11.2 openSUSE came with. I did turn it off and the problem was the same, so, whether wishful thinking or not, I'm hoping it's not an iptables issue.
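And from the client side, these are the probes I'll try next (server name and export path are placeholders):
Code:
showmount -e nfsserver     # can the client retrieve the export list?
rpcinfo -p nfsserver       # are portmapper/mountd/nfs visible through iptables?
mount -v -t nfs nfsserver:/srv/share /mnt   # verbose mount for a clearer error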