Software :: Too Many Users For Server - Gauge Whether The Proxy Is Being Overloaded?
May 19, 2010
I would like to know how to gauge when the server is getting overloaded with users. At present I run the server mainly as a proxy server with about 100 users. The data centre connection is 100Mbps, and total bandwidth used last month was 17431.16 MB. I would like to add a VPN in future, but I worry this might overload the link, since instead of just web traffic it will carry the clients' entire TCP traffic. I would like to monitor this before users start complaining, but I'm not sure how to gauge whether the proxy is being overloaded. It is used mainly for video traffic.
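One rough way to gauge this is to watch live throughput on the proxy's external interface and compare it to the 100Mbps link. A minimal sketch, sampling the byte counters in /proc/net/dev one second apart ("lo" is used only so the example runs anywhere; substitute the real interface, e.g. eth0):

```shell
# Print approximate inbound kbit/s for an interface over a 1-second
# sample, using the kernel's byte counters in /proc/net/dev.
rate_kbit() {
    iface=$1
    rx1=$(tr -s ': ' ' ' < /proc/net/dev | awk -v i="$iface" '$1 == i {print $2}')
    sleep 1
    rx2=$(tr -s ': ' ' ' < /proc/net/dev | awk -v i="$iface" '$1 == i {print $2}')
    echo $(( (rx2 - rx1) * 8 / 1000 ))
}
rate_kbit lo
```

If sustained samples sit near the link capacity during busy hours, the proxy is bandwidth-bound; if they stay low while users complain, look at CPU, disk, or the proxy's own limits instead.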
This is my quest: I need to install a box with a proxy server where users can log in, record what those users have been doing online, and generate a daily activity report. I have been googling for days; so far I have tried Zeroshell (no reports) and Untangle (no login).
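Squid can cover both halves of this: its proxy_auth ACLs give you the login, and a log analyser such as sarg turns access.log into per-user HTML reports. A hypothetical cron entry (paths and the report directory are assumptions; sarg is a separate package):

```
# /etc/crontab sketch: nightly per-user activity report from squid's log
59 23 * * * root sarg -l /var/log/squid/access.log -o /var/www/squid-reports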
How do I lock down individual users from setting a proxy server? It's a server, not a workstation, so it should never go out to the Internet. I want to lock down both the system-side settings and Firefox 5's settings.
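For the Firefox side, locked preferences are the usual mechanism: a `lockPref` line greys the option out in the UI so users cannot change it. A sketch (the file location varies by distro and Firefox version, and must be referenced from a `general.config.filename` pref in `defaults/pref/local-settings.js`; treat the paths as assumptions):

```
// mozilla.cfg (or /etc/firefox/syspref.js on some distros):
lockPref("network.proxy.type", 0);   // 0 = direct connection, no proxy
lockPref("network.proxy.http", "");
```

System-side, unsetting `http_proxy`/`https_proxy` in /etc/environment and restricting outbound traffic with iptables covers what the browser settings cannot.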
I have configured the proxy server and want to observe users' download activity. What do I need to do? Should I install Squint? What is the process, and how do I monitor the users?
I'm trying to set up a proxy server on my CentOS server and have been looking at Squid. Is there a proxy server that supports authenticating users against usernames and passwords in a MySQL database? I want this so I have good control over who connects through my proxy.
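Squid itself can do this through its database auth helper (shipped as `squid_db_auth` or `basic_db_auth` depending on version), so no other proxy is needed. A squid.conf sketch; the helper path, database name, and column names are assumptions to adapt:

```
# Authenticate proxy users against a MySQL table via the DBI helper:
auth_param basic program /usr/lib/squid/squid_db_auth \
    --dsn DBI:mysql:database=squid --user squiduser --password secret \
    --table passwd --usercol user --passwdcol password --plaintext
auth_param basic children 5
auth_param basic realm Proxy
acl db_users proxy_auth REQUIRED
http_access allow db_users
```

The helper is a Perl script using DBI, so the `perl-DBD-MySQL` package must be installed on CentOS.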
I'm running a VPS with Debian Lenny: Apache2, memcached, Postgres, and the Django framework under mod_wsgi. Some of the pages served by Django take a long time to generate (approx. 15 s), but memcached compensates for this.
My problem is that when a robot visits the site, it starts traversing all the pages, including the not-yet-cached ones, slowing the site down to the point where it stops responding.
What I'm looking for is a way to identify that a request comes from a robot (user-agent, IPs, etc.) and to limit its resources, so that e.g. only one thread serves the robot.
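One blunt but effective layer, independent of Apache and Django, is to cap concurrent connections per source IP at the firewall so a single crawler cannot occupy every worker. A sketch (the limit of 10 and port 80 are assumptions; requires root and the connlimit iptables match):

```
iptables -I INPUT -p tcp --syn --dport 80 \
    -m connlimit --connlimit-above 10 -j REJECT --reject-with tcp-reset
```

Well-behaved bots can additionally be slowed with a `Crawl-delay` line in robots.txt; user-agent-based throttling has to happen in Apache or Django middleware, since iptables cannot see HTTP headers.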
I've had this problem for a few weeks and cannot figure it out; I'm pulling my hair out. I have a server running PHP, lighttpd and Redis. Sometimes I get the following messages in lighty's error log:
Code:
2010-09-24 13:57:33: (mod_fastcgi.c.3011) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 567
2010-09-24 13:57:33: (mod_fastcgi.c.3011) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 626
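That message means mod_fastcgi's queue for the PHP backend ("load: 567") is far beyond what the configured processes can drain. If the hardware has headroom, giving lighttpd more PHP backends is the usual first step. A fastcgi.server sketch; the paths and numbers are assumptions to tune, not drop-in values:

```
fastcgi.server = ( ".php" => ((
    "bin-path" => "/usr/bin/php-cgi",
    "socket"   => "/tmp/php.socket",
    "max-procs" => 4,
    "bin-environment" => (
        "PHP_FCGI_CHILDREN"     => "16",
        "PHP_FCGI_MAX_REQUESTS" => "10000"
    )
)))
```

If load spikes instead come from one slow script (e.g. a blocking Redis call), more backends only mask it; timing the PHP side is worth doing first.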
I'm trying to work out what is overloading my web site. I'm running LAMP on Fedora 12 on an AMD64 processor. My site is relatively low volume; a good day is over 250 visitors, and most days it's below 200. I can't see anything there that would overload even a small box like mine.

Several times a day -- perhaps 5 or 6, it's hard to say because I'm not always there -- I get flooded with requests. I run "top" in the background all the time, and what I see is the load average going through the roof -- I've seen the 1-minute figure go over 50 in about 3 minutes -- while the CPU numbers stay reasonably low, which I interpret as the system being I/O bound. "top" shows at least 40 or 50 httpd sessions in flight, with PIDs spanning around 150 numbers slightly out of sequence, suggesting the requests arrive in close proximity but not precisely at the same time. These episodes can last up to 40 minutes before the system clears and the load average returns to something sane, although I've had instances where the flurry lasts maybe 5 minutes and the load average goes no higher than about 20. The httpd log shows no particular pattern of clients hitting my server. The requests appear to be for historical pages from the blog (e.g. I see requests for images from older pages, not the current front page).
My best guess is that I'm watching Google or some other web-caching service scanning my site. But I don't know. Maybe I pissed off some aggressive hacker (it's a political site) and he/she/it has figured out a way to periodically cause me grief from masked sites. I have two questions:
1) Does anybody recognize this pattern? Can you tell me what it is?
2) How can I streamline mysql and apache so these incidents don't cripple me for half an hour?
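For question 1, the access log can usually answer it: count hits per client IP during an episode and look at the top talkers' user-agent strings. A sketch, with a hypothetical sample standing in for /var/log/httpd/access_log:

```shell
# Count requests per client IP and list the top talkers.
cat > /tmp/access_log.sample <<'EOF'
66.249.1.1 - - [19/May/2010:10:00:01 -0500] "GET /2009/old-post HTTP/1.1" 200 512
66.249.1.1 - - [19/May/2010:10:00:02 -0500] "GET /img/a.png HTTP/1.1" 200 2048
10.0.0.9 - - [19/May/2010:10:00:03 -0500] "GET / HTTP/1.1" 200 1024
EOF
awk '{print $1}' /tmp/access_log.sample | sort | uniq -c | sort -rn | head
```

Run the same pipeline against the real log for a flood window, then grep the top IP's lines to see the user-agent: a legitimate crawler (Googlebot etc.) identifies itself and honors a `Crawl-delay` in robots.txt, which also addresses question 2 more cheaply than tuning MySQL.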
My DHCP server is working. What I want now is auto-detection of the Squid proxy in any browser, but I still get an error from the DHCP server when I restart it.
My config:
Code:
# DHCP configuration generated by Firestarter
ddns-update-style interim;
ignore client-updates;
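Browser proxy auto-detection is normally done via WPAD: the DHCP server hands out option 252 pointing at a wpad.dat/PAC file, and browsers set to "auto-detect proxy" fetch it. A dhcpd.conf sketch (URL and subnet values are assumptions; the trailing \n is a common workaround for picky clients):

```
option wpad code 252 = text;

subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.200;
    option routers 192.168.0.1;
    option wpad "http://192.168.0.1/wpad.dat\n";
}
```

As for the restart error, it is almost always a syntax problem: `dhcpd -t` checks the config file and reports the offending line without touching the running service.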
I am trying to set up my squid3 proxy as a transparent proxy - right now I have to configure browsers manually to use the proxy. I understand that I have to put some rules into iptables and add some further directives to squid.conf.
I have a couple of specific questions. The proxy server runs on an Ubuntu 10.04 workstation, which also acts as the DHCP server for the network. I have just one subnet, namely 192.168.0.1-254, and only one network card. Is it much easier to put in a second network card, or just as easy to configure the existing card with a dual IP?
Is it necessary for these two IPs (whether on two cards or dual IP on a single card) to be on different subnets, i.e. eth0 192.168.0.1 and eth1 192.168.1.1, or is it OK to have something like eth0 192.168.0.1 and eth1 192.168.0.254 (where eth0 faces the LAN and eth1 points to the modem router, i.e. the Internet)? And where specifically do I save the iptables rules file, and what must I call it?
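On the last question: Ubuntu has no fixed filename or location for iptables rules, so you save them wherever you like and restore them at boot yourself. A sketch of the interception rule plus one common persistence arrangement (interface name and file path are assumptions; squid.conf also needs `http_port 3128 transparent`):

```
# Intercept outbound web traffic and hand it to squid (run as root):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

# Persist the rules across reboots:
iptables-save > /etc/iptables.rules
# then in /etc/network/interfaces, under the LAN interface:
#   pre-up iptables-restore < /etc/iptables.rules
```

A second NIC (or even a second IP) is only needed if the box actually routes between two networks; if it sits on the LAN beside the router and clients are pointed at it via REDIRECT, one address works, and when you do use two interfaces, putting them on different subnets keeps the routing table unambiguous.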
I just noticed this issue today: when I unplug my laptop from its adapter, the battery immediately loses 44% of its charge. Is there any way to solve this?
I'm writing a script based on dialog, and one of its functions is to perform a restore with dd, whose progress I would like to show via the "gauge" option of the "dialog" utility. My plan was to send a USR1 signal periodically to the dd PID, because that makes dd print the number of bytes copied so far to the console. A simple calculation against the original size would then give a percentage that can be fed to dialog's gauge.
The challenge I'm facing now is launching an external script (with "&", I guess, to put it in the background) that sends a USR1 signal to the dd PID every 5 seconds or so. That in itself is no problem, but how do I capture the output triggered by this "kill -USR1" in the original script? I tested this, and the output does not go to standard output, so it must be going to standard error. For now I don't see a way out.
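It is indeed standard error: dd's USR1 progress report goes to dd's own stderr, so the trick is to redirect that stream to a file when you start dd, then parse the file. A minimal sketch (sizes and paths are just for illustration; a tiny /dev/zero copy stands in for the real restore):

```shell
#!/bin/sh
# Poll a background dd with SIGUSR1 and turn its byte count into a
# percentage. Key detail: dd reports on *stderr*, hence the 2> file.
TOTAL=$((4 * 1024 * 1024))                        # expected total bytes
dd if=/dev/zero of=/tmp/restore.img bs=1M count=4 2>/tmp/dd.err &
DDPID=$!
while kill -0 "$DDPID" 2>/dev/null; do
    sleep 1
    kill -USR1 "$DDPID" 2>/dev/null || true       # ask dd to report
done
wait "$DDPID" 2>/dev/null || true
# The last "NNN bytes ... copied" line holds the running total:
BYTES=$(awk '/bytes/ {b=$1} END {print b}' /tmp/dd.err)
echo $(( BYTES * 100 / TOTAL ))                   # percentage done
# In the real script, echo the percentage inside the loop and pipe the
# whole loop into:  dialog --gauge "Restoring..." 10 60 0
```

No separate background killer script is needed: the polling loop, the awk parse, and the echo can all live in one subshell piped straight into `dialog --gauge`.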
I am not sure whether this is possible or not. We run a Squid proxy server for our office and restrict users' Internet access with ACLs. Some users do the following:
1. Create their own proxy on a box that has Internet access.
2. Other users then use that box as a proxy to reach the Internet.
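One Squid-side countermeasure: requests relayed through another (well-behaved) HTTP proxy arrive carrying a Via header that the chained proxy added, and Squid's `req_header` ACL can match on that. A squid.conf sketch (the ACL name is an assumption; this only catches proxies that set Via, so it complements rather than replaces firewall rules blocking direct outbound access from user machines):

```
# Refuse requests that have already passed through another proxy:
acl chained_proxy req_header Via .
http_access deny chained_proxy
```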
My mom's Ubuntu machine has been rather strange for the last day or so. It takes a while to boot, apparently stalling while loading "powernowd".
GDM seems to load fine, but then logging in to GNOME takes a long time. The panels usually crash, as does the desktop, but eventually everything loads again.
Most programs take a long time to load; "sudo" also takes a long time to request a password, although after that the sudoed command runs normally.
Gksudo never seems to show up, so I can only launch programs like "Software Sources" from the terminal (e.g. sudo program).
Strangely, GNOME System Monitor says neither the processor nor memory is anywhere near full.
Does anyone know what this is? Are there any commands I can run to diagnose it?
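A few low-risk checks fit this symptom pattern (slow sudo/gksudo with idle CPU and RAM). The classic culprit is a hostname that no longer resolves via /etc/hosts, which makes every privilege check wait on a DNS timeout:

```shell
# Check the classic slow-sudo cause first:
if grep -q "$(hostname)" /etc/hosts; then
    echo "hostname resolves locally"
else
    echo "hostname missing from /etc/hosts -- fix this first"
fi
dmesg 2>/dev/null | grep -iE 'error|fail' | tail -n 5 || true   # disk/kernel errors
df -h    # a filesystem at 100% also stalls logins and session start-up
```

If the hostname check fails, add the machine's hostname to the 127.0.0.1 line in /etc/hosts; if dmesg shows ATA/I-O errors instead, suspect a dying disk.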
What is the best way to determine whether a multi-processor Linux machine is overloaded? I thought load average was a good measure, but I run a large number of tasks that don't consume much CPU yet drive up the load. A 4-processor machine currently shows a load of 66 in top, for example, while mpstat reports the all-CPU idle time as 89%.
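That combination is expected: Linux load average counts tasks that are runnable *or* in uninterruptible (usually disk/NFS) sleep, so many I/O-bound tasks inflate it while the CPUs sit idle. A sketch that puts the number in context:

```shell
# Compare load to CPU count and look for D-state (I/O-blocked) tasks.
cpus=$(grep -c ^processor /proc/cpuinfo)
load=$(cut -d' ' -f1 /proc/loadavg)
echo "1-minute load $load across $cpus CPUs"
echo "tasks in uninterruptible sleep: $(ps -eo stat= | grep -c '^D' || true)"
```

A load of 66 with 89% idle and many D-state tasks means the box is I/O-bound, not CPU-overloaded; for CPU saturation, mpstat's idle figure (or the run-queue column in `vmstat`) is the better gauge.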
I will be relocating to a permanent residence sometime in the next year or two, and I've recently begun thinking about the best way to implement a home network. It occurred to me that the most elegant solution might be to use VM technology to eliminate as much hardware and wiring as possible. My thinking is this: install a multi-core system and configure it to run several VMs, one each for a firewall, a caching proxy server, a mail server and a web server. Additionally, I would like to run 2-4 VMs as remote (RDP) workstations, using diskless workstations to boot the VMs over powerline Ethernet. The latest powerline technology (available later this year) will allow multiple devices on a residential circuit to operate at near-gigabit speed, just like legacy wired networks.
In theory, the above would let me consolidate everything but the diskless workstations onto a single server and eliminate all wired (and wireless) connections except the broadband connection to the Internet and the cabling to the nearest power outlets. It appears technically possible, but I'm not sure about the various virtual connections among the VMs. In theory each VM should be able to communicate with the others over the server's data bus as if they were on the same network, but what about setting up firewall zones? Any internal I/O bandwidth bottlenecks? Any other potential "gotchas", caveats or issues (other than the obvious requirement of enough CPU and RAM)? Any thoughts or observations are welcome, especially from real-world experience in a VM environment. BTW -- in case you're wondering why I'm posting here, it's because I run Debian on all my workstations/servers (with VirtualBox hosting a Windows XP VM on one workstation).
I have been using Ubuntu for my college work for some time, and suddenly last week Ubuntu users in one specific department/building lost the ability to connect to the Internet through the school proxy. The problem seems to have affected only our department/building.
What is so annoying is that on the very same computer we have Internet access under Windows but none under Ubuntu. Under Ubuntu the DHCP server assigns IPs just as before, and we can ping the default gateway, but we can't reach the proxy server. When we ping the proxy we get a message saying it refuses the connection, and Firefox and Chrome both report <proxy server not available> when we try to browse. The network guys say the only recent change is that they activated IPv6, but I fail to see how that could become a problem.
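IPv6 can matter here: if the proxy's name now resolves to an IPv6 address first and that address isn't actually routed in your building, Linux clients (which prefer IPv6 more eagerly than the Windows setup may) will try it and fail. A quick check, with "localhost" standing in for the real proxy hostname:

```shell
# See which address family the proxy's name resolves to:
getent ahosts localhost
# If an unroutable IPv6 address comes back first, disabling IPv6 on
# one client is a quick way to confirm the theory (needs root):
#   sysctl -w net.ipv6.conf.all.disable_ipv6=1
```

If the IPv6-disabled client suddenly reaches the proxy, the fix belongs on the network side (correct AAAA records or working IPv6 routing), not on every Ubuntu machine.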
I would like all of my Internet connections to go through a proxy server: HTTP as well as FTP, and every other type of connection. How can I do that? On top of that, is there a free public proxy list for Ubuntu users?
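For HTTP/HTTPS/FTP, the environment-variable approach covers most programs; the proxy host and port below are placeholders, and the lines belong in ~/.bashrc or /etc/environment to persist:

```shell
# Route the common protocols through a proxy for programs that honor
# these variables (most CLI tools and many desktop apps):
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export ftp_proxy=http://proxy.example.com:3128
env | grep -i _proxy
```

To force literally *every* TCP connection through a proxy you need a SOCKS server plus a socksifier such as tsocks or proxychains (or iptables redirection), since not all programs read these variables.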
I have two Internet-facing servers (xx.xx.xx.xx and yy.yy.yy.yy). The Y server runs a VNC server and answers VNC sessions, but I need to hide the IP of the Y server, so I want the X server to act as a VNC proxy and redirect all VNC sessions to Y.
I guess the best way is to use iptables, but I can't get it working.
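iptables can do this with a DNAT/MASQUERADE pair on server X. A sketch for VNC display :0 (port 5900; run as root, and note the MASQUERADE rule is what makes Y's replies return via X so Y's address stays hidden):

```
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING  -p tcp --dport 5900 -j DNAT --to-destination yy.yy.yy.yy:5900
iptables -t nat -A POSTROUTING -p tcp -d yy.yy.yy.yy --dport 5900 -j MASQUERADE
iptables -A FORWARD -p tcp -d yy.yy.yy.yy --dport 5900 -j ACCEPT
```

If the default FORWARD policy is DROP, the last rule (plus a matching rule for return traffic, or an ESTABLISHED,RELATED accept) is what people usually forget.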
I have inherited a RedHat 5.4 server with an odd issue: root and all user accounts can log in via SSH, but not a single account can log in at the console (sitting in front of the server). If I bring the machine up in single-user mode, I can log in as root all day long.
I want to say this has something to do with PAM, but this is where I play my "noob" card. Could anyone steer me in the right direction to figure out what is going on?
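PAM is a reasonable suspect; on RHEL 5 the console-only pieces worth checking are /etc/securetty (consulted by pam_securetty for console but not SSH logins) and /etc/nologin. All of these checks are read-only and harmless:

```shell
# Console-login suspects on RHEL 5 (paths are the stock ones):
head /etc/securetty 2>/dev/null || true          # must list tty1..tty6
grep pam_securetty /etc/pam.d/login 2>/dev/null || true
ls -l /etc/nologin 2>/dev/null || echo "no /etc/nologin (good)"
tail -n 20 /var/log/secure 2>/dev/null || true   # PAM usually names the failing module
```

An empty or missing /etc/securetty blocks even root at the console while leaving SSH untouched, which matches the symptoms exactly; /var/log/secure will normally name the module that rejected the login.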
I'm trying to configure my intranet to be accessible from inside the network (LAN) without a password, while asking for a password from those viewing it from the WAN.
Today my intranet can only be accessed from the LAN; external access gives an Unauthorized message. I have looked around and tried #irc, but still can't get the appropriate help. I hope someone here can help me with this.
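Apache (2.2 syntax, current at the time) handles exactly this with `Satisfy Any`: a request passes if *either* the host-based allow matches (LAN) *or* basic auth succeeds (WAN). A sketch; the directory, realm, htpasswd path and subnet are assumptions:

```
<Directory /var/www/intranet>
    AuthType Basic
    AuthName "Intranet"
    AuthUserFile /etc/apache2/htpasswd.intranet
    Require valid-user
    Order allow,deny
    Allow from 192.168.0.0/24
    Satisfy Any
</Directory>
```

The htpasswd file is created with `htpasswd -c /etc/apache2/htpasswd.intranet someuser`; without `Satisfy Any`, Apache demands both conditions and outside users stay locked out.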
Scenario:
A - local Unix machine
B - SOCKS proxy server, port 1080
C - remote MySQL server, port 3306
I want to connect to the remote MySQL server (C) from the local Unix machine (A) via the SOCKS proxy (B).
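The mysql client cannot speak SOCKS itself, so the usual approach is a socksifier such as proxychains, which intercepts the client's TCP connections and routes them through B. A sketch of the relevant config (requires the proxychains package; B.B.B.B stands for B's real address, and `socks4` replaces `socks5` if B only speaks SOCKS4):

```
# /etc/proxychains.conf
strict_chain
proxy_dns
[ProxyList]
socks5  B.B.B.B  1080
```

Then run `proxychains mysql -h C.C.C.C -P 3306 -u dbuser -p` on A; the client's connection to C:3306 is transparently tunnelled through B:1080.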
We need to log the web access of a certain set of users for analysis. We decided to set up a proxy server that just logs all requests and does nothing else (no caching, access control, etc.). All users will be on a fixed set of computers, so we can point their requests at the proxy. I came across Squid but found it too heavy for our requirements. Is there other proxy-server software that would be good enough, or is Squid the only way?
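tinyproxy is one lightweight alternative: a small forwarding-only HTTP proxy with per-request logging and no cache. A minimal /etc/tinyproxy.conf sketch (values are defaults or assumptions to adapt):

```
Port 8888
Listen 0.0.0.0
LogFile "/var/log/tinyproxy/tinyproxy.log"
LogLevel Info
Allow 192.168.0.0/24
```

Its log records client IP, timestamp and requested URL, which covers the analysis requirement; Squid remains the better fit only if you later need per-user authentication in the logs.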
A few friends have seen me bypass firewalls with a SOCKS proxy (SSH). I explained how it works and how much safer it makes browsing the Internet and checking email in public places. At least six of them have asked if I could set up an account on my server for them, and they would pay me! What I want to know is how to set this up on a server and website where they can register an account and pay me through PayPal. I don't need help setting up the site, just with automating the server side. What tools are needed (e.g. ISPConfig, Jailkit, that sort of thing)? I don't mind doing this manually, but if more people want this I don't want to set up every single account by hand.
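Whatever provisioning tool you pick, the server-side pattern is the same: accounts that can authenticate and forward ports but get no shell. One common sshd_config arrangement (the "tunnel" group name is an assumption; users would be created with something like `useradd -m -G tunnel -s /sbin/nologin alice`):

```
Match Group tunnel
    AllowTcpForwarding yes
    X11Forwarding no
    PermitTunnel no
```

Clients then connect with `ssh -N -D 1080 alice@server`: `-N` requests no session, so the nologin shell never runs, while the dynamic (SOCKS) forwarding still works. Your signup script only has to run the useradd/passwd step after PayPal confirms payment.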
I want to make a transparent Squid proxy server on CentOS, with Squid version 2.6 STABLE. I have a normal Squid server working but want to make it transparent so that users do not need to enter proxy settings in the web browser. I have searched Google but haven't found a proper answer. The CentOS system has two LAN cards: eth1 for the LAN and eth2 for the WAN. In squid.conf I have written "http_port 172.16.31.1:3128 transparent", and I added the iptables rule "iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128", but I still have to enter proxy settings in the clients' browsers to use the Internet.
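The config and rule quoted above look right for Squid 2.6, so the problem is usually elsewhere: the clients must use this box as their default gateway (otherwise their port-80 traffic never arrives on eth1), and the box must actually forward. Two read-only checks:

```shell
# Is routing enabled, and is the REDIRECT rule seeing packets?
cat /proc/sys/net/ipv4/ip_forward    # 1 = the box forwards at all
iptables -t nat -L PREROUTING -v -n 2>/dev/null \
    || echo "run as root to inspect the NAT counters"
```

If the pkts counter on the REDIRECT rule stays at zero while a client browses, the traffic is taking another path (wrong gateway on the clients, or it enters on eth2 instead of eth1).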
We currently have a SUSE Apache2 server set up as a reverse proxy (mod_proxy) for our GroupWise Web Access server. The SUSE box is at www.domain.com; the GroupWise Web Access server is internal and is called GWMail. We are migrating from Novell to Windows, so we will have an Exchange server with OWA access running on a Windows Server 2008 IIS7 box, internally called EXMail. Right now, when someone goes to www.domain.com/gw/webacc from the outside world, it is proxied to the internal GWMail server. This was all set up by previous techs who were more familiar with Linux.
We would like to set up the reverse proxy to reach the Exchange server from the outside world. The snag is that Exchange needs to run on port 443, and forwarding to port 443 has been a little tricky; I've read elsewhere that we might need to implement a generic TCP proxy, such as iptables. What do we need to do to get our SUSE Apache2 server to reverse proxy to our Exchange server on port 443? For the sake of argument, let's call our SUSE server ExtranetServer. Below is the configuration from our default-configuration.conf file:
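A generic TCP proxy usually isn't necessary: Apache's mod_proxy can itself speak HTTPS to a backend once mod_ssl is loaded and `SSLProxyEngine` is enabled. A sketch of what the SSL vhost on ExtranetServer could look like (hostnames are from the post; paths and the /owa prefix are assumptions):

```
<VirtualHost *:443>
    ServerName www.domain.com
    SSLEngine on
    SSLProxyEngine on
    ProxyPass        /owa https://EXMail/owa
    ProxyPassReverse /owa https://EXMail/owa
</VirtualHost>
```

Two caveats: the ExtranetServer vhost itself needs a certificate for www.domain.com, and OWA features relying on Integrated Windows (NTLM) authentication can misbehave behind a generic reverse proxy, so forms-based authentication on the Exchange side is the safer pairing.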